11 datasets found
  1. Data from: OpenColab project: OpenSim in Google colaboratory to explore...

    • tandf.figshare.com
    docx
    Updated Jul 6, 2023
    Cite
    Hossein Mokhtarzadeh; Fangwei Jiang; Shengzhe Zhao; Fatemeh Malekipour (2023). OpenColab project: OpenSim in Google colaboratory to explore biomechanics on the web [Dataset]. http://doi.org/10.6084/m9.figshare.20440340.v1
    Available download formats: docx
    Dataset updated
    Jul 6, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Hossein Mokhtarzadeh; Fangwei Jiang; Shengzhe Zhao; Fatemeh Malekipour
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    OpenSim is an open-source biomechanical package with a variety of applications. It is available to many users through bindings in MATLAB, Python, and Java via its application programming interfaces (APIs). Although the developers have documented OpenSim installation on different operating systems (Windows, Mac, and Linux) well, installation remains time-consuming and complex since each operating system requires a different configuration. This project aims to demystify the development of neuro-musculoskeletal modeling in OpenSim: zero installation configuration on any operating system (thus cross-platform) and easy model sharing, while accessing free graphical processing units (GPUs) on the web-based Google Colab platform. To achieve this, OpenColab was developed: the OpenSim source code was used to build a Conda package that can be installed on Google Colab with a single block of code in under 7 min. Using OpenColab requires only an internet connection and a Gmail account. Moreover, OpenColab accesses the vast libraries of machine learning methods available within free Google products, e.g. TensorFlow. Next, we performed an inverse problem in biomechanics and compared OpenColab results with the OpenSim graphical user interface (GUI) for validation. The outcomes of OpenColab and the GUI matched well (r ≥ 0.82). OpenColab takes advantage of the zero-configuration of cloud-based platforms, accesses GPUs, and enables users to share and reproduce modeling approaches for further validation, innovative online training, and research applications. Step-by-step installation processes and examples are available at: https://simtk.org/projects/opencolab.
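    The install flow implied above can be sketched as two Colab cells. This is a minimal sketch, not the project's actual installer: the condacolab bootstrap and the opensim-org conda channel are assumptions on our part, and the project's real one-block install is documented at https://simtk.org/projects/opencolab.

    # Cell 1: bootstrap conda inside Google Colab (this restarts the runtime once).
    !pip install -q condacolab
    import condacolab
    condacolab.install()

    # Cell 2, after the restart: install OpenSim and smoke-test the bindings.
    !conda install -y -c opensim-org opensim
    import opensim as osim
    print(osim.GetVersionAndDate())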

  2. Apple Leaf Disease Detection Using Vision Transformer

    • zenodo.org
    text/x-python
    Updated Jun 20, 2025
    Cite
    Amreen Batool (2025). Apple Leaf Disease Detection Using Vision Transformer [Dataset]. http://doi.org/10.5281/zenodo.15702007
    Available download formats: text/x-python
    Dataset updated
    Jun 20, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Amreen Batool
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains a Python script for classifying apple leaf diseases using a Vision Transformer (ViT) model. The dataset used is the Plant Village dataset, which contains images of apple leaves with four classes: Healthy, Apple Scab, Black Rot, and Cedar Apple Rust. The script includes data preprocessing, model training, and evaluation steps.

    Introduction

    The goal of this project is to classify apple leaf diseases using a Vision Transformer (ViT) model. The dataset is divided into four classes: Healthy, Apple Scab, Black Rot, and Cedar Apple Rust. The script includes data preprocessing, model training, and evaluation steps.

    Code Explanation

    1. Importing Libraries

    • The script starts by importing necessary libraries such as matplotlib, seaborn, numpy, pandas, tensorflow, and sklearn. These libraries are used for data visualization, data manipulation, and building/training the deep learning model.

    2. Visualizing the Dataset

    • The walk_through_dir function is used to explore the dataset directory structure and count the number of images in each class.
    • The dataset is divided into Train, Val, and Test directories, each containing subdirectories for the four classes.

    3. Data Augmentation

    • The script uses ImageDataGenerator from Keras to apply data augmentation techniques such as rotation, horizontal flipping, and rescaling to the training data. This helps improve the model's generalization ability; a sketch of this setup follows below.
    • Separate generators are created for the training, validation, and test datasets.
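    A minimal sketch of such an augmentation setup, assuming the Train/Val directory layout from the Steps section; the parameter values here are illustrative, not the script's actual settings.

    import tensorflow as tf
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Augmented generator for training; plain rescaling for validation.
    train_datagen = ImageDataGenerator(
        rescale=1.0 / 255,       # rescaling
        rotation_range=20,       # random rotation
        horizontal_flip=True,    # horizontal flipping
    )
    val_datagen = ImageDataGenerator(rescale=1.0 / 255)

    train_gen = train_datagen.flow_from_directory(
        "Train", target_size=(224, 224), batch_size=32, class_mode="categorical")
    val_gen = val_datagen.flow_from_directory(
        "Val", target_size=(224, 224), batch_size=32, class_mode="categorical")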

    4. Patch Visualization

    • The script defines a Patches layer that extracts patches from the images. This is a crucial step in Vision Transformers, where images are divided into smaller patches that are then processed by the transformer; a sketch of such a layer follows below.
    • The script visualizes these patches for different patch sizes (32x32, 16x16, 8x8) to show how the image is divided.
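    One common TensorFlow implementation of such a layer (the pattern used in the Keras ViT tutorial; the script's exact code may differ):

    import tensorflow as tf

    class Patches(tf.keras.layers.Layer):
        # Split a batch of images into flattened square patches.
        def __init__(self, patch_size):
            super().__init__()
            self.patch_size = patch_size

        def call(self, images):
            batch_size = tf.shape(images)[0]
            patches = tf.image.extract_patches(
                images=images,
                sizes=[1, self.patch_size, self.patch_size, 1],
                strides=[1, self.patch_size, self.patch_size, 1],
                rates=[1, 1, 1, 1],
                padding="VALID",
            )
            # (batch, rows, cols, patch_dims) -> (batch, num_patches, patch_dims)
            return tf.reshape(patches, [batch_size, -1, patches.shape[-1]])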

    5. Model Training

    • The script defines a Vision Transformer (ViT) model using TensorFlow and Keras. The model is compiled with the Adam optimizer and categorical cross-entropy loss; the compile/fit pattern is sketched below.
    • The model is trained for a specified number of epochs, and the training history is stored for later analysis.
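    Continuing the sketches above with a deliberately tiny stand-in model (not the script's full ViT), the compile/fit pattern described would look like this; the epoch count and learning rate are assumptions.

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = Patches(16)(inputs)                    # Patches layer from the sketch above
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(4, activation="softmax")(x)  # 4 leaf classes
    model = tf.keras.Model(inputs, outputs)

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    history = model.fit(train_gen, validation_data=val_gen, epochs=25)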

    6. Model Evaluation

    • After training, the model is evaluated on the test dataset. The script generates a confusion matrix and a classification report to assess the model's performance; this step is sketched below.
    • The confusion matrix is visualized using seaborn to provide a clear understanding of the model's predictions.
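    A sketch of that evaluation step, reusing the generators and model from the sketches above (shuffle is disabled so predictions align with the labels):

    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt
    from sklearn.metrics import confusion_matrix, classification_report

    test_gen = val_datagen.flow_from_directory(
        "Test", target_size=(224, 224), batch_size=32,
        class_mode="categorical", shuffle=False)

    y_pred = np.argmax(model.predict(test_gen), axis=1)
    y_true = test_gen.classes
    labels = list(test_gen.class_indices)

    print(classification_report(y_true, y_pred, target_names=labels))
    sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, fmt="d",
                xticklabels=labels, yticklabels=labels)
    plt.xlabel("Predicted"); plt.ylabel("True"); plt.show()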

    7. Visualizing Misclassified Images

    • The script includes functionality to visualize misclassified images, which helps in understanding where the model is making errors.

    8. Fine-Tuning and Learning Rate Adjustment

    • The script demonstrates how to fine-tune the model by adjusting the learning rate and re-training it; one simple realization is sketched below.
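    One simple way to realize that step, continuing the sketches above: re-compile with a smaller learning rate and continue training (the specific values here are assumptions).

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # 10x smaller LR
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    history_ft = model.fit(train_gen, validation_data=val_gen, epochs=10)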

    Steps for Implementation

    1. Dataset Preparation

      • Ensure that the dataset is organized into Train, Val, and Test directories, with each directory containing subdirectories for each class (Healthy, Apple Scab, Black Rot, Cedar Apple Rust).
    2. Install Required Libraries

      • Install the necessary Python libraries using pip:
        pip install tensorflow matplotlib seaborn numpy pandas scikit-learn
    3. Run the Script

      • Execute the script in a Python environment. The script will automatically:
        • Load and preprocess the dataset.
        • Apply data augmentation.
        • Train the Vision Transformer model.
        • Evaluate the model and generate performance metrics.
    4. Analyze Results

      • Review the confusion matrix and classification report to understand the model's performance.
      • Visualize misclassified images to identify potential areas for improvement.
    5. Fine-Tuning

      • Experiment with different patch sizes, learning rates, and data augmentation techniques to improve the model's accuracy.
  3. Image enhancement code: time-resolved tomograms of EICP application using 3D...

    • b2find.eudat.eu
    Updated Apr 12, 2025
    + more versions
    Cite
    (2025). Image enhancement code: time-resolved tomograms of EICP application using 3D U-net - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/123f13fa-d8cf-5b7a-bb0b-e2c1b3799741
    Dataset updated
    Apr 12, 2025
    Description

    This dataset contains the code to reproduce the results of "Time resolved micro-XRCT dataset of Enzymatically Induced Calcite Precipitation (EICP) in sintered glass bead columns", cf. https://doi.org/10.18419/darus-2227. The code takes "low-dose" images as input, where the images contain many artifacts and noise as a trade-off for fast data acquisition (6 min per dataset, versus 3 hours per dataset for a "high-dose" acquisition in the normal configuration). These low-quality images can be improved with the help of a pre-trained model. The pre-trained model provided here was trained on pairs of "high-dose" and "low-dose" data from the above-mentioned EICP application. Examples of the training, input, and output data used can also be found in this dataset. Although only limited examples are shown here, we would like to emphasize that the workflow and code can be extended to general image enhancement applications. The code requires Python 3.7.7 or above with the tensorflow, keras, pandas, scipy, scikit, numpy, and patchify libraries. For further details of operation, please refer to the readme.txt file.
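    The patch-based enhancement loop implied above might be sketched as follows; this is an illustration only, and the file names, patch size, and single-channel U-net output are assumptions (see readme.txt for the real steps).

    import numpy as np
    from patchify import patchify, unpatchify
    from tensorflow import keras

    model = keras.models.load_model("pretrained_3d_unet.h5")   # hypothetical file name

    low_dose = np.load("low_dose_tomogram.npy")          # noisy fast-acquisition volume
    patches = patchify(low_dose, (64, 64, 64), step=64)  # grid of non-overlapping 3D patches
    grid = patches.shape
    batch = patches.reshape(-1, 64, 64, 64, 1)           # flatten grid into a batch

    enhanced = model.predict(batch, batch_size=4)
    restored = unpatchify(enhanced.reshape(grid), low_dose.shape)  # reassemble the volume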

  4. StackOverflow-TP4-1M

    • huggingface.co
    Updated Feb 5, 2024
    Cite
    Syed Hasan (2024). StackOverflow-TP4-1M [Dataset]. https://huggingface.co/datasets/Syed-Hasan-8503/StackOverflow-TP4-1M
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Feb 5, 2024
    Authors
    Syed Hasan
    Description

    Dataset Details

      Dataset Description

    TP4 is a comprehensive dataset containing a curated collection of questions and answers from Stack Overflow. Focused on the realms of Python programming, NumPy, Pandas, TensorFlow, and PyTorch, TP4 includes essential attributes such as question ID, title, question body, answer body, associated tags, and score. This dataset is designed to facilitate research, analysis, and exploration of inquiries and solutions within the Python and… See the full description on the dataset page: https://huggingface.co/datasets/Syed-Hasan-8503/StackOverflow-TP4-1M.
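    A quick way to pull the dataset is the Hugging Face datasets library; the column names printed below are expectations from the description (question ID, title, question body, answer body, tags, score), not verified.

    from datasets import load_dataset

    ds = load_dataset("Syed-Hasan-8503/StackOverflow-TP4-1M", split="train")
    print(ds.column_names)   # inspect the actual schema
    print(ds[0])             # one curated question/answer record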

  5. Neural Networks in Friction Factor Analysis of Smooth Pipe Bends

    • data.mendeley.com
    Updated Dec 19, 2022
    + more versions
    Cite
    Adarsh Vasa (2022). Neural Networks in Friction Factor Analysis of Smooth Pipe Bends [Dataset]. http://doi.org/10.17632/sjvbwh5ckg.1
    Dataset updated
    Dec 19, 2022
    Authors
    Adarsh Vasa
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PROGRAM SUMMARY
    No. of lines in distributed program, including test data, etc.: 481
    No. of bytes in distributed program, including test data, etc.: 14540.8
    Distribution format: .py, .csv
    Programming language: Python
    Computer: Any workstation or laptop computer running TensorFlow, Google Colab, Anaconda, Jupyter, pandas, NumPy, Microsoft Azure and Alteryx.
    Operating system: Windows, Mac OS, Linux.

    Nature of problem: Navier-Stokes equations are solved numerically in ANSYS Fluent using the Reynolds stress model for turbulence. The simulated values of friction factor are validated against theoretical and experimental data obtained from the literature. Artificial neural networks are then used for a prediction-based augmentation of friction factor. The capabilities of the neural networks are discussed with regard to computational cost and domain limitations.

    Solution method: The simulation data is obtained through Reynolds stress modelling of fluid flow through a pipe. This data is augmented using an artificial neural network model that predicts both within and beyond the data domain; a sketch of such a network follows.
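    A minimal sketch of such a prediction network in Keras; the CSV file name and column names (Reynolds number, curvature ratio, friction factor) are assumptions, not the distributed program's actual schema.

    import pandas as pd
    import tensorflow as tf

    df = pd.read_csv("friction_data.csv")                    # hypothetical data file
    X = df[["reynolds_number", "curvature_ratio"]].to_numpy(dtype="float32")
    y = df["friction_factor"].to_numpy(dtype="float32")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(X.shape[1],)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),                            # predicted friction factor
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=200, batch_size=16, verbose=0)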

    Restrictions: The code used in this research is limited to smooth pipe bends, in which the friction factor is analysed for steady-state incompressible fluid flow.

    Runtime: The artificial neural network produces results within a span of 20 seconds for three-dimensional geometry, using the allocated free computational resources of Google Colaboratory cloud-based computing system.

  6. Fracture network segmentation - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Jul 24, 2025
    + more versions
    Cite
    (2025). Fracture network segmentation - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/41e75660-e65f-5685-9373-387a742bfcd2
    Dataset updated
    Jul 24, 2025
    Description

    This dataset contains the code to reproduce the five different segmentation results of the paper Lee et al. (2021). The original dataset, before applying these segmentation codes, can be found in Ruf & Steeb (2020). The segmentation methods adopted to identify the micro-fractures within the original dataset are Local threshold, Sato, Chan-Vese, Random forest, and a U-net model. The Local threshold, Sato, and U-net models are written in Python; the code requires Python 3.7.7 or above with the tensorflow, keras, pandas, scipy, scikit, and numpy libraries. The workflow of the Chan-Vese method is implemented in Matlab2018b. The result of the Random forest method can be reproduced with the uploaded trained model in the open-source program ImageJ and the trainableWeka library. For further details of operation, please refer to the readme.txt file. Two of the Python methods are sketched below for orientation.
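    As an illustration only (not the uploaded code), the Local threshold and Sato steps might look like this with scikit-image; the file name and parameter values are assumptions.

    from skimage import filters, io

    img = io.imread("tomogram_slice.tif", as_gray=True)   # hypothetical input slice

    # Sato ridge filter, then a global cut on the ridge response.
    ridges = filters.sato(img, sigmas=range(1, 4), black_ridges=False)
    sato_mask = ridges > filters.threshold_otsu(ridges)

    # Local (adaptive) threshold; fractures assumed darker than the matrix.
    local_thr = filters.threshold_local(img, block_size=51)
    local_mask = img < local_thr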

  7. Data from: Prune the Bias From the Root: Bias Removal and Fairness...

    • zenodo.org
    zip
    Updated Jul 11, 2025
    Cite
    Qiaolin Qin (2025). Prune the Bias From the Root: Bias Removal and Fairness Estimation by Muting Sensitive Attributes in Pre-trained DNN Models [Dataset]. http://doi.org/10.5281/zenodo.15864927
    Available download formats: zip
    Dataset updated
    Jul 11, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Qiaolin Qin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the replication package for the paper "Prune the Bias From the Root: Bias Removal and Fairness Estimation by Muting Sensitive Attributes in Pre-trained DNN Models".

    1. Introduction

    Attribute pruning is a simple yet effective post-processing technique that enforces individual fairness by zeroing out sensitive attribute weights in a pre-trained DNN's input layer (a minimal sketch of this operation follows the research questions below). To ensure the generalizability of our results, we conducted experiments on 32 models and 4 widely used datasets, and compared attribute pruning's performance with 3 baseline post-processing methods (i.e., equalized odds, calibrated equalized odds, and ROC). In this study, we reveal the effectiveness of sensitive attribute pruning for small-scale DNN bias removal and discuss its use in multi-attribute fairness estimation by answering the following research questions:

    RQ1: How does single-attribute pruning perform in comparison to the existing post-processing methods?
    By answering this research question, we aim to understand the accuracy and group fairness impact of single-attribute pruning on 32 models and compare them with 3 state-of-the-art post-processing methods.

    RQ2: How does multi-attribute pruning impact and aid understanding of the original models?
    By answering this research question, we investigate the accuracy impact of multi-attribute pruning on 24 models. Further, we investigate the prediction change brought by attribute pruning on different subgroups and discuss their implications on multi-attribute fairness estimation.
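    The sketch promised above: a hypothetical helper (our own, not the replication package's actual code) that mutes sensitive attributes in a Keras model, assuming the first Dense layer is the input layer.

    import tensorflow as tf

    def prune_sensitive_attributes(model, sensitive_indices):
        # Zero the input-layer kernel rows attached to sensitive attribute
        # columns, so predictions cannot depend on those inputs.
        first_dense = next(layer for layer in model.layers
                           if isinstance(layer, tf.keras.layers.Dense))
        kernel, bias = first_dense.get_weights()
        kernel[sensitive_indices, :] = 0.0
        first_dense.set_weights([kernel, bias])
        return model

    # e.g., muting the "age" column of a pre-trained GC model (path and
    # column index are hypothetical):
    # model = prune_sensitive_attributes(
    #     tf.keras.models.load_model("models/GC/model_1.h5"), [12])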

    2. Dependencies

    • Python >= 3.9
    • numpy == 1.24.4
    • fairlearn == 0.12.0
    • aif360 == 0.6.1
    • scikit-learn == 1.6.1
    • tensorflow == 2.14.0
    • pandas == 2.0.3
    • scipy == 1.13.1

    3. Dataset

    To comprehensively understand the impact of sensitive attribute pruning, we select four commonly used fairness datasets collected from different domains, namely Bank Marketing (BM), German Credit (GC), Adult Census (AC), and COMPAS. We select the four datasets because they provide a wide range of corresponding pre-trained models used in existing research. The introduction to the datasets is as follows:

    Bank Marketing (BM): The Bank Marketing dataset consists of marketing data from a Portuguese bank, containing 45,222 instances with 16 attributes, and the biased attribute identified is age. The objective is to classify whether a client will subscribe to a term deposit.

    German Credit (GC): The German Credit dataset includes 1,000 instances of individuals who have taken credit from a bank, each described by 20 attributes, with two sensitive attributes, sex and age; the single sensitive attribute to be evaluated in RQ1 is age, given that the subgroup positive rate difference (i.e., historical bias in the label) on this sensitive attribute is higher than sex. The task is to classify the credit risk of an individual.

    Adult Census (AC): The Adult Census dataset comprises United States census data from 41,188 individuals after empty entry removal, with 13 attributes. The sensitive attributes in the dataset are sex and race; the single sensitive attribute to be evaluated in RQ1 is sex. The goal is to predict whether an individual earns more than $50,000 per year.

    COMPAS: The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) dataset is collected from a system widely used for criminal recidivism risk prediction, containing 6,172 individuals and 6 attributes. The sensitive attributes in the dataset are race and age; to keep aligned with previous research, the single sensitive attribute to be evaluated in RQ1 is race. The goal is to predict whether an individual will reoffend in the future.

    4. Experiments

    To replicate the experiments, run the code in the src folder; its sub-folders contain the code for implementing the post-processing methods on each dataset. To obtain the basic results, run all the code in each folder. The results will be stored in the results folder; we also provide the code for statistical analyses (i.e., paired t-tests) under this folder. To conduct the statistical analyses, run statistic_test.py and check the results in single_att_ttest.json.
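    The paired t-test behind statistic_test.py, in miniature (the numbers below are placeholders, not the study's results):

    from scipy.stats import ttest_rel

    # Compare per-model accuracy before vs. after attribute pruning.
    acc_original = [0.81, 0.77, 0.90, 0.73]
    acc_pruned   = [0.80, 0.77, 0.89, 0.74]
    t_stat, p_value = ttest_rel(acc_original, acc_pruned)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")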

    RQ1: How does single-attribute pruning perform in comparison to the existing post-processing methods?
    While ensuring individual fairness on the single attribute, attribute pruning will not significantly impact accuracy. It preserved the highest post-processing accuracy among the four methods on 23 out of 32 models. It can also improve the two group accuracies in general, but its improvements are insignificant and not always optimal in comparison to the other three methods. Further, given the theoretical difference between individual fairness and group fairness, attribute pruning may even harm group fairness when the observed dataset is not comprehensive enough to cover the whole data space.

    RQ2: How does multi-attribute pruning impact and aid understanding of the original models?
    According to our experiment on 24 models, multi-attribute pruning can also retain a certain level of accuracy while enhancing individual fairness. It can also be used to estimate multi-attribute group fairness in models with similar original accuracy based on the TPR difference before and after pruning the sensitive attributes.

    5. Folder Structure

    β”œβ”€β”€ data                        # The 4 datasets used in the study
    β”œβ”€β”€ models                      # Model files for the 32 models included in our experiment
    β”œβ”€β”€ results                     # Results for RQ1 and RQ2
    β”‚   β”œβ”€β”€ AC
    β”‚   β”œβ”€β”€ BM
    β”‚   β”œβ”€β”€ GC
    β”‚   β”œβ”€β”€ compas
    β”‚   β”œβ”€β”€ single_att_ttest.json   # Statistical analysis results
    β”‚   └── statistic_test.py
    β”œβ”€β”€ src                         # Codes for implementing the post-processing methods on each dataset
    β”‚   β”œβ”€β”€ AC
    β”‚   β”œβ”€β”€ BM
    β”‚   β”œβ”€β”€ GC
    β”‚   └── compas
    β”œβ”€β”€ utils
    β”œβ”€β”€ tables
    └── README.md

  8. API Database of Python frameworks & Labeled Issues

    • zenodo.org
    • explore.openaire.eu
    bin
    Updated Jul 22, 2024
    + more versions
    Cite
    Anonymous Authors (2024). API Database of Python frameworks & Labeled Issues [Dataset]. http://doi.org/10.5281/zenodo.3518685
    Available download formats: bin
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous Authors
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PyLibAPIs.7z: contains public API data (MongoDB dump) for these frameworks:

    • TensorFlow
    • Keras
    • Scikit-learn
    • Pandas
    • Flask
    • Django

    Label.xlsx: contains issues and their labels

    Breaking Changes for All Frameworks.pdf: contains the breaking change distributions of all six frameworks

  9. API Database of Python frameworks & Labeled Issues

    • data.niaid.nih.gov
    Updated Aug 4, 2021
    Cite
    Anonymous Authors (2021). API Database of Python frameworks & Labeled Issues [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_2756358
    Dataset updated
    Aug 4, 2021
    Dataset authored and provided by
    Anonymous Authors
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PyLibAPIs.7z: contains public API data (MongoDB dump) for these frameworks:

    • TensorFlow
    • Keras
    • scikit-learn
    • Pandas
    • Flask
    • Django

    Label.xlsx: contains issues and their labels

  10. GitHub Commit Messages Dataset

    • kaggle.com
    Updated Apr 21, 2021
    Cite
    Dhruvil Dave (2021). GitHub Commit Messages Dataset [Dataset]. http://doi.org/10.34740/kaggle/dsv/2143532
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Apr 21, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Dhruvil Dave
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    Image credits: https://github.com

    Introduction

    This is a dataset that contains all commit messages and their related metadata from 34 popular GitHub repositories. These repositories are:

    • tensorflow/tensorflow
    • pytorch/pytorch
    • torvalds/linux
    • python/cpython
    • rust-lang/rust
    • microsoft/TypeScript
    • microsoft/vscode
    • golang/go
    • numpy/numpy
    • scikit-learn/scikit-learn
    • openbsd/src
    • freebsd/freebsd-src
    • pandas-dev/pandas
    • scipy/scipy
    • tidyverse/ggplot2
    • kubernetes/kubernetes
    • postgres/postgres
    • nodejs/node
    • facebook/react
    • angular/angular
    • matplotlib/matplotlib
    • apache/httpd
    • nginx/nginx
    • opencv/opencv
    • ipython/ipython
    • rstudio/rstudio
    • jupyterlab/jupyterlab
    • gcc-mirror/gcc
    • apple/swift
    • denoland/deno
    • apache/spark
    • llvm/llvm-project
    • chromium/chromium
    • v8/v8

    Data as of Wed Apr 21 03:42:44 PM IST 2021

    Credits

    Image credits: Unsplash - plhnk

  11. Data from: Informative neural representations of unseen contents during...

    • openneuro.org
    Updated Dec 10, 2021
    Cite
    Ning Mei; Roberto Santana; David Soto (2021). Informative neural representations of unseen contents during higher-order processing in human brains and deep artificial networks [Dataset]. http://doi.org/10.18112/openneuro.ds003927.v1.0.1
    Dataset updated
    Dec 10, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Ning Mei; Roberto Santana; David Soto
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This fMRI dataset was collected for the study "Informative neural representations of unseen contents during higher-order processing in human brains and deep artificial networks".

    Code corresponding to the dataset: https://github.com/nmningmei/unconfeats

    System Information

    • Platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-centos-7.3.1611-Core
    • CPU: x86_64, 16 cores

    Python environment

    • Python: 3.6.3 |Anaconda, Inc.| (default, Nov 20 2017, 20:41:42) [GCC 7.2.0]
    • Numpy: 1.19.1
    • Scipy: 1.3.1
    • Matplotlib: 3.1.3
    • Scikit-learn: 0.24.2
    • Seaborn: 0.11.1
    • Pandas: 1.0.1
    • Tensorflow: 2.0.0
    • Pytorch: 1.7.1
    • Nilearn: 0.7.1
    • Nipype: 1.4.2
    • LegrandNico/metadPy

    R environment

    • R: 4.0.3 (for 3-way repeated-measures ANOVAs)

    Brain image processing backends

    • mricrogl
    • mricron: 10.2014
    • FSL: 6.0.0
    • Freesurfer: 6.0.0