2 datasets found
  1. Table_2_A Novel Computational Framework for Precision Diagnosis and Subtype Discovery of Plant With Lesion.XLSX

    • figshare.com
    xlsx
    Updated Jun 15, 2023
    Cite
    Fei Xia; Xiaojun Xie; Zongqin Wang; Shichao Jin; Ke Yan; Zhiwei Ji (2023). Table_2_A Novel Computational Framework for Precision Diagnosis and Subtype Discovery of Plant With Lesion.XLSX [Dataset]. http://doi.org/10.3389/fpls.2021.789630.s003
    Available download formats: xlsx
    Dataset updated
    Jun 15, 2023
    Dataset provided by
    Frontiers
    Authors
    Fei Xia; Xiaojun Xie; Zongqin Wang; Shichao Jin; Ke Yan; Zhiwei Ji
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Plants are often attacked by various pathogens during growth, which can cause environmental pollution, food shortages, or economic losses in the affected area. Integrating high-throughput phenomics data with computer vision (CV) offers a great opportunity to diagnose plant diseases at an early stage and to uncover subtype or stage patterns in disease progression. In this study, we propose a novel computational framework for plant disease identification and subtype discovery based on a deep-embedding image-clustering strategy that combines a Weighted Distance Metric with the t-stochastic neighbor embedding algorithm (WDM-tSNE). To verify its effectiveness, we applied the method to four public image datasets. The results demonstrate that the newly developed tool can identify plant diseases and further uncover the underlying subtypes associated with pathogen resistance. In summary, the framework provides strong clustering performance on root or leaf images of diseased plants with pronounced disease spots or symptoms.
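    The general shape of a weighted-distance t-SNE clustering pipeline like the one described can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: random vectors stand in for the deep image embeddings, and the per-feature weights, perplexity, and cluster count are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import TSNE
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)

    # Stand-in for deep image embeddings: 60 "images", 128-d features.
    # In the paper these would come from a deep-embedding network.
    X = rng.normal(size=(60, 128))

    # Weighted distance metric: per-feature weights (illustrative; the
    # actual weighting scheme is defined in the paper).
    w = rng.uniform(0.5, 1.5, size=X.shape[1])
    D = squareform(pdist(X, metric="minkowski", p=2, w=w))

    # t-SNE on the precomputed weighted distances, then cluster the
    # low-dimensional embedding to recover candidate disease subtypes.
    emb = TSNE(n_components=2, metric="precomputed", init="random",
               perplexity=15, random_state=0).fit_transform(D)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)

    print(emb.shape)         # (60, 2)
    print(len(set(labels)))  # 3
    ```

    Note that scikit-learn's `TSNE` requires `init="random"` when `metric="precomputed"`, since a PCA initialisation is undefined for a bare distance matrix.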

  2. Inertia Sensors for Human Activity Recognition

    • kaggle.com
    Updated Jun 30, 2021
    Cite
    Owen Agius (2021). Inertia Sensors for Human Activity Recognition [Dataset]. https://www.kaggle.com/datasets/owenagius/inertia-sensors-for-human-activity-recognition
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Jun 30, 2021
    Dataset provided by
    Kaggle
    Authors
    Owen Agius
    License

    https://ec.europa.eu/info/legal-notice_en

    Description

    The main scope of this project is to implement a Human Activity Recognition (HAR) system. Its purpose is to identify the action the user is performing, based solely on the changes in motion of the user's body during specific activities.

    Most HAR systems use specialised motion sensors secured to the user's body, including, but not limited to, the waist, chest, arms, and legs. The main problem with this type of system is the complex setup the user must wear during the activity, in addition to the cost of purchasing the sensors. Given the simplicity of the application, many users are likely to be discouraged by such a complex, arguably excessive, setup. Thanks to rapid technological advances and the efforts of many researchers, this setup has been reduced to a single smartphone. The initial smartphone set-up still relied on a bulkier belt mount, an aspect that can be improved upon with the user's comfort as the main priority.

    For our project, therefore, we aimed to develop a simple Human Activity Recognition prototype that uses only the built-in sensors found in an average smartphone and eliminates the belt mount, allowing users to carry their phone in their pockets. While this may result in less accurate predictions, it lets users retain their usual habit of keeping the phone in a pocket.

    Moreover, two separate datasets were gathered by the three members working on this project. The first was collected to mimic the dataset described in [1], with six actions: Walking, Walking Downstairs, Walking Upstairs, Sitting, Standing, and Laying. It was created to compare our results against those gathered and processed by Anguita et al. We also collected a second dataset of physical activities not included in the existing data; these likewise require body movement and were recorded through the smartphone's accelerometer and gyroscope. The six new activities are Cycling, Football, Swimming, Tennis, Jump Rope, and Push-ups. In summary, the aim was not only to interpret the original six activities that most Human Activity Recognition papers focus on, but also to recognise six additional physical activities.

    Classifying the data with high accuracy can be divided into two steps: data collection and modelling. To collect a sufficient amount of data, the free app 'AndroSensor' was used, allowing collection from the four main inertia sensors: gyroscope, gravity, accelerometer, and linear acceleration. Roughly one hour's worth of data was collected for each of the 12 activities above, giving the model an even distribution of data across all categories.

    After collection, the data is pre-processed: 'NaN' and duplicated values are removed, and statistical features are generated from the 'csv' file produced by 'AndroSensor'. The processed data is then analysed with the t-SNE algorithm, which aids the visualisation of data clusters. Finally, the data is modelled and classified using four supervised machine learning algorithms: Logistic Regression, Support Vector Machines, Decision Trees, and K-Nearest Neighbours.
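    The preprocessing and classification steps above can be sketched roughly as follows. Synthetic readings stand in for the real AndroSensor CSV export, and the column names, activity subset, and class separation are illustrative assumptions, not the project's actual data.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the AndroSensor export: accelerometer and
    # gyroscope magnitudes for three illustrative activities.
    frames = []
    for i, act in enumerate(["Walking", "Cycling", "Push-ups"]):
        frames.append(pd.DataFrame({
            "acc_mag": rng.normal(loc=2.0 * i, scale=0.3, size=100),
            "gyro_mag": rng.normal(loc=1.0 * i, scale=0.3, size=100),
            "activity": act,
        }))
    df = pd.concat(frames, ignore_index=True)
    df.loc[0, "acc_mag"] = np.nan  # inject a NaN to exercise the cleaning step

    # Pre-processing: drop NaN and duplicated rows, as described above.
    df = df.dropna().drop_duplicates()

    X, y = df[["acc_mag", "gyro_mag"]], df["activity"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    # The four supervised models named in the description.
    models = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "K-Nearest Neighbours": KNeighborsClassifier(),
    }
    scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
              for name, m in models.items()}
    print(scores)
    ```

    On real sensor data, windowed statistical features (mean, variance, and so on per time window) would replace the raw magnitudes used here.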

    Do not hesitate to contact me at owenagius24@gmail.com if you wish to see the whole report.

