6 datasets found
  1. Data from: OpenColab project: OpenSim in Google colaboratory to explore...

    • tandf.figshare.com
    docx
    Updated Jul 6, 2023
    Cite
    Hossein Mokhtarzadeh; Fangwei Jiang; Shengzhe Zhao; Fatemeh Malekipour (2023). OpenColab project: OpenSim in Google colaboratory to explore biomechanics on the web [Dataset]. http://doi.org/10.6084/m9.figshare.20440340.v1
    Explore at:
    Available download formats: docx
    Dataset updated
    Jul 6, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Hossein Mokhtarzadeh; Fangwei Jiang; Shengzhe Zhao; Fatemeh Malekipour
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    OpenSim is an open-source biomechanical package with a variety of applications. It is available to many users with bindings in MATLAB, Python, and Java via its application programming interfaces (APIs). Although the developers have documented OpenSim installation well for different operating systems (Windows, Mac, and Linux), installation is time-consuming and complex since each operating system requires a different configuration. This project aims to demystify the development of neuro-musculoskeletal modeling in OpenSim by offering zero-configuration installation on any operating system (thus cross-platform), easy model sharing, and access to free graphical processing units (GPUs) on the web-based Google Colab platform. To achieve this, OpenColab was developed: the OpenSim source code was used to build a Conda package that can be installed on Google Colab with a single block of code in less than 7 minutes. To use OpenColab, one only requires an internet connection and a Gmail account. Moreover, OpenColab accesses the vast libraries of machine learning methods available within free Google products, e.g. TensorFlow. Next, we performed an inverse problem in biomechanics and compared OpenColab results with the OpenSim graphical user interface (GUI) for validation. The outcomes of OpenColab and the GUI matched well (r≥0.82). OpenColab takes advantage of the zero configuration of cloud-based platforms, accesses GPUs, and enables users to share and reproduce modeling approaches for further validation, innovative online training, and research applications. Step-by-step installation processes and examples are available at: https://simtk.org/projects/opencolab.
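
    The one-block installation described above can be sketched for a Colab notebook roughly as follows. This is an illustration under assumptions, not the project's exact cell: it assumes the condacolab helper and an opensim Conda package on the opensim-org channel; the documented install cell is at https://simtk.org/projects/opencolab.

    # Cell 1: enable Conda inside the Colab runtime (the runtime restarts after this call)
    !pip install -q condacolab
    import condacolab
    condacolab.install()

    # Cell 2 (after the restart): install OpenSim from a Conda channel (assumed channel/package names)
    !conda install -y -c opensim-org opensim

    # Cell 3: confirm the Python bindings load
    import opensim as osim
    print(osim.GetVersionAndDate())
    model = osim.Model()   # empty musculoskeletal model, ready for the usual OpenSim workflow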

  2. imagenet2012_subset

    • tensorflow.org
    Updated Oct 21, 2024
    + more versions
    Cite
    (2024). imagenet2012_subset [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet2012_subset
    Explore at:
    Dataset updated
    Oct 21, 2024
    Description

    ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). In ImageNet, we aim to provide on average 1,000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.

    The test split contains 100K images but no labels because no labels have been publicly released. We provide support for the test split from 2012 with the minor patch released on October 10, 2019. In order to manually download this data, a user must perform the following operations:

    1. Download the 2012 test split available here.
    2. Download the October 10, 2019 patch. There is a Google Drive link to the patch provided on the same page.
    3. Combine the two tar-balls, manually overwriting any images in the original archive with images from the patch. According to the instructions on image-net.org, this procedure overwrites just a few images.

    The resulting tar-ball may then be processed by TFDS.
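
    As a rough sketch (the exact archive names and manual download directory TFDS expects are listed on the catalog page; the path below is only illustrative), preparing and loading the data then looks like:

    import tensorflow_datasets as tfds

    builder = tfds.builder('imagenet2012_subset')
    builder.download_and_prepare(
        download_config=tfds.download.DownloadConfig(
            manual_dir='~/tensorflow_datasets/downloads/manual/'))
    datasets = builder.as_dataset()   # dict of tf.data.Dataset objects keyed by split name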

    To assess the accuracy of a model on the ImageNet test split, one must run inference on all images in the split and export those results to a text file, which must then be uploaded to the ImageNet evaluation server. The maintainers of the ImageNet evaluation server permit a single user to submit up to 2 submissions per week in order to prevent overfitting.

    To evaluate the accuracy on the test split, one must first create an account at image-net.org. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at https://image-net.org/challenges/LSVRC/eval_server.php. The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is "Classification submission (top-5 cls error)". A sample of an exported text file looks like the following:

    771 778 794 387 650
    363 691 764 923 427
    737 369 430 531 124
    755 930 755 59 168
    

    The export format is described in full in "readme.txt" within the 2013 development kit available here: https://image-net.org/data/ILSVRC/2013/ILSVRC2013_devkit.tgz. Please see the section entitled "3.3 CLS-LOC submission format". Briefly, the text file consists of 100,000 lines, one per image in the test split. Each line of integers corresponds to the rank-ordered, top-5 predictions for that test image. The integers are 1-indexed, corresponding to the line number in the corresponding labels file. See labels.txt.
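
    A submission file in that format could be produced along the following lines; the score array here is a random stand-in for real model outputs.

    import numpy as np

    # One row of class scores per test image, in the split's canonical order.
    logits = np.random.rand(100000, 1000)

    # Rank-ordered top-5 predictions, shifted to the 1-indexed ids the
    # evaluation server expects (line numbers in the labels file).
    top5 = np.argsort(-logits, axis=1)[:, :5] + 1

    with open('classification_submission.txt', 'w') as f:
        for row in top5:
            f.write(' '.join(str(i) for i in row) + '\n')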

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenet2012_subset', split='train')
    for ex in ds.take(4):
      print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet2012_subset-1pct-5.0.0.png

  3. Synthetic Speech Commands Dataset

    • kaggle.com
    Updated Jun 12, 2018
    Cite
    JohannesBuchner (2018). Synthetic Speech Commands Dataset [Dataset]. https://www.kaggle.com/jbuchner/synthetic-speech-commands-dataset/code
    Explore at:
    Croissant (Croissant is a format for machine-learning datasets; learn more about this at mlcommons.org/croissant)
    Dataset updated
    Jun 12, 2018
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    JohannesBuchner
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Context

    • We would like to have good open-source speech recognition.
    • Commercial companies try to solve a hard problem: map arbitrary, open-ended speech to text and identify meaning.
    • The easier problem should be: detect a predefined sequence of sounds and map it to a predefined action.
    • Let's tackle the simplest problem first: classifying single, short words (commands).
    • Audio training data is difficult to obtain.

    Approaches

    • The parent project (spoken verbs) created synthetic speech datasets using text-to-speech programs. The focus there is on single-syllable verbs (commands).
    • The Speech Commands dataset (by Pete Warden, see the TensorFlow Speech Recognition Challenge) asked volunteers to pronounce a small set of words: (yes, no, up, down, left, right, on, off, stop, go, and 0-9).
    • This dataset provides synthetic counterparts to that real-world dataset.

    Open questions

    One can use these two datasets in various ways. Here are some things I am interested in seeing answered:

    1. What is it in an audio sample that makes it "sound similar"? Our ears can easily classify both synthetic and real speech, but for algorithms this is still hard. Extending the real dataset with the synthetic data yields a larger training sample and more diversity.
    2. How well does an algorithm trained on one data set perform on the other? (transfer learning) If it works poorly, the algorithm probably has not found the key to audio similarity.
    3. Are synthetic data sufficient for classifying real datasets? If this is the case, the implications are huge. You would not need to ask thousands of volunteers for hours of time. Instead, you could easily create arbitrary synthetic datasets for your target words.

    An interesting challenge (idea for a competition) would be to train on this dataset and evaluate on the real dataset.

    Synthetic data creation

    Here I describe how the synthetic audio samples were created. Code is available at https://github.com/JohannesBuchner/spoken-command-recognition, in the "tensorflow-speech-words" folder.

    1. The list of words is in "inputwords". "marvin" was changed to "marvel", because "marvin" does not have a pronunciation coding yet.
    2. Pronunciations were taken from the British English Example Pronunciation dictionary (BEEP, http://svr-www.eng.cam.ac.uk/comp.speech/Section1/Lexical/beep.html). The phonemes were translated for the next step with a translation table (see compile.py for details). This creates the file "words". There are multiple pronunciations and stresses for each word.
    3. A text-to-speech program (espeak) was used to pronounce these words (see generatetfspeech.sh for details). The pronunciation, stress, pitch, speed, and speaker were varied. This gives >1000 clean examples for each word.
    4. Noise samples were obtained. Noise samples (airport, babble, car, exhibition, restaurant, street, subway, train) come from AURORA (https://www.ee.columbia.edu/~dpwe/sounds/noise/), and additional noise samples were synthetically created (ocean, white, brown, pink) (see ../generatenoise.sh for details).
    5. Noise and speech were mixed. The speech volume and offset were varied, as were the noise source and volume. See addnoise.py for details; addnoise2.py is the same, but with lower speech volume and higher noise volume. All audio files are one second (1 s) long and in wav format (16 bit, mono, 16000 Hz). A minimal illustrative mixing sketch follows this list.
    6. Finally, the data was compressed into an archive and uploaded to Kaggle.
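
    The mixing step can be illustrated roughly as follows. This is a sketch only (not the project's addnoise.py); file names are hypothetical, and it assumes the noise sample is at least one second long.

    import numpy as np
    from scipy.io import wavfile

    SR = 16000                                        # 16 kHz mono, as in the dataset

    _, speech = wavfile.read("clean/marvel_0.wav")    # clean espeak rendering of one word (hypothetical path)
    _, noise = wavfile.read("noise/babble.wav")       # noise sample, assumed >= 1 s long (hypothetical path)
    speech = speech.astype(np.float32)
    noise = noise.astype(np.float32)

    out = np.zeros(SR, dtype=np.float32)              # exactly one second of output
    offset = np.random.randint(0, SR - len(speech)) if len(speech) < SR else 0
    length = min(len(speech), SR - offset)
    out[offset:offset + length] += 0.8 * speech[:length]   # varied speech volume and offset
    out += 0.2 * noise[:SR]                                # varied noise source and volume

    out = np.clip(out, -32768, 32767).astype(np.int16)     # back to 16-bit PCM
    wavfile.write("mixed/marvel_0_babble.wav", SR, out)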

    Acknowledgements

    This work built upon

    Please provide appropriate citations to the above when using this work.

    To cite the resulting dataset, you can use:

    APA-style citation: "Buchner J. Synthetic Speech Commands: A public dataset for single-word speech recognition, 2017. Available from https://www.kaggle.com/jbuchner/synthetic-speech-commands-dataset/".

    BibTeX @article{speechcommands, title={Synthetic Speech Commands: A public dataset for single-word speech recognition.}, author={Buchner, Johannes}, journal={Dataset available from https://www.kaggle.com/jbuchner/synthetic-speech-commands-dataset/}, year={2017} }

    Thanks to everyone trying to improve open source voice detection and speech recognition.


  4. Data from: A hands-on guide to use network video recorders, internet...

    • search.dataone.org
    • data.niaid.nih.gov
    Updated Jul 29, 2025
    Cite
    Konrad Karlsson (2025). A hands-on guide to use network video recorders, internet protocol cameras, and deep learning models for dynamic monitoring of trout and salmon in small streams [Dataset]. http://doi.org/10.5061/dryad.v6wwpzh3g
    Explore at:
    Dataset updated
    Jul 29, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Konrad Karlsson
    Description

    This study outlines a method for using surveillance cameras and an algorithm that calls a deep learning model to generate video segments featuring salmon and trout in small streams. This automated process greatly reduces the need for human intervention in video surveillance. Further, a comprehensive guide is provided on setting up and configuring surveillance equipment, along with instructions on training a deep learning model tailored to specific requirements. Access to video data and knowledge about deep learning models makes monitoring of trout and salmon dynamic and hands-on, as the collected data can be used to train and further improve deep learning models. Hopefully, this setup will encourage fisheries managers to conduct more monitoring, as the equipment is relatively cheap compared to customized solutions for fish monitoring. To make effective use of the data, natural markings of the camera-captured fish can be used for individual identification. While the automated process grea...

    Please refer to the article and the README file with the deposited data.

    # A hands-on guide to use network video recorders, internet protocol cameras, and deep learning models for dynamic monitoring of trout and salmon in small streams

    https://doi.org/10.5061/dryad.v6wwpzh3g

    Konrad Karlsson

    Department of Aquatic Resources, Institute of Freshwater Research, Swedish University of Agricultural Sciences,
    Stångholmsvägen 2, 178 93 Drottningholm, Sweden


    Below is a brief description of the .py and .R scripts, what the scripts do, and the folder(s) they relate to. You will have to set the directory in the scripts in order to run them. A Word file is provided to make it easier to get Python, TensorFlow, and ffmpeg installed on Windows 10:
    "Install and run TensorFlow and ffmpeg on Windows 10.docx".

    Important note:

    The Python scripts are an essential part of the study. The two R scripts, "train model in R.R" and "split video files in R.R", are similar to what is included in the Python scripts and may be ...
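
    As an illustration of the detection step described above (this is not the deposited code; the model path, input size, and threshold are hypothetical), a recorded clip can be scanned frame by frame, and the frames where fish are detected can later be cut into segments with ffmpeg:

    import cv2
    import tensorflow as tf

    model = tf.keras.models.load_model("fish_classifier.h5")   # hypothetical trained classifier

    cap = cv2.VideoCapture("nvr_recording.mp4")                # clip pulled from the network video recorder
    fps = cap.get(cv2.CAP_PROP_FPS)
    hits = []                                                  # frame indices where a fish is detected
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = cv2.resize(frame, (224, 224))[None].astype("float32") / 255.0
        if model.predict(x, verbose=0)[0, 0] > 0.5:            # assumed sigmoid fish/no-fish output
            hits.append(idx)
        idx += 1
    cap.release()

    # Detected frame ranges (hits/fps gives timestamps) can then be cut into segments,
    # e.g. with ffmpeg: ffmpeg -ss <start> -to <end> -i nvr_recording.mp4 -c copy out.mp4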

  5. wmt14_translate

    • tensorflow.org
    + more versions
    Cite
    wmt14_translate [Dataset]. https://www.tensorflow.org/datasets/catalog/wmt14_translate
    Explore at:
    Description

    Translate dataset based on the data from statmt.org.

    Versions exist for the different years, using a combination of multiple data sources. The base wmt_translate allows you to create your own config to choose your own data/language pair by creating a custom tfds.translate.wmt.WmtConfig.

    config = tfds.translate.wmt.WmtConfig(
      version="0.0.1",
      language_pair=("fr", "de"),
      subsets={
        tfds.Split.TRAIN: ["commoncrawl_frde"],
        tfds.Split.VALIDATION: ["euelections_dev2019"],
      },
    )
    builder = tfds.builder("wmt_translate", config=config)
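
    Once a custom config is built, preparation and loading follow the usual TFDS pattern (a sketch; the subset names above come from the catalog example):

    builder.download_and_prepare()
    ds_train = builder.as_dataset(split='train')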
    

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('wmt14_translate', split='train')
    for ex in ds.take(4):
      print(ex)
    

    See the guide for more information on tensorflow_datasets.

  6. diabetic_retinopathy_detection

    • tensorflow.org
    Updated Feb 20, 2020
    Cite
    (2020). diabetic_retinopathy_detection [Dataset]. https://www.tensorflow.org/datasets/catalog/diabetic_retinopathy_detection
    Explore at:
    Dataset updated
    Feb 20, 2020
    Description

    A large set of high-resolution retina images taken under a variety of imaging conditions.

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('diabetic_retinopathy_detection', split='train')
    for ex in ds.take(4):
      print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/diabetic_retinopathy_detection-original-3.0.0.png

