9 datasets found
  1. Clipping of cough sound clip.

    • plos.figshare.com
    xls
    Updated May 14, 2024
    Cite
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu (2024). Clipping of cough sound clip. [Dataset]. http://doi.org/10.1371/journal.pone.0302651.t002
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Since the COVID-19 pandemic, cough sounds have been widely used for screening purposes, and intelligent analysis techniques have proven effective in detecting respiratory diseases. In 2021, there were up to 10 million TB-infected patients worldwide, with an annual growth rate of 4.5%; most were from economically underdeveloped regions and countries. The PPD test, a common screening method in the community, has a sensitivity as low as 77%. Although IGRA and Xpert MTB/RIF offer high specificity and sensitivity, their cost makes them less accessible. In this study, we proposed a feature-fusion-based cough sound classification method for primary TB screening in communities. Data were collected from hospitals using smartphones, including 230 cough sounds from 70 patients with TB and 226 cough sounds from 74 healthy subjects. We employed Bi-LSTM and Bi-GRU recurrent neural networks to analyze five traditional feature sets: Mel-frequency cepstral coefficients (MFCC), zero-crossing rate (ZCR), short-time energy, root mean square (RMS), and chroma_cens. Incorporating features extracted from the speech spectrogram by 2D convolution into the Bi-LSTM model enhanced the classification results. With traditional features alone, the best TB detection result was achieved with the Bi-LSTM model: 93.99% accuracy, 93.93% specificity, and 92.39% sensitivity. When combined with the speech spectrogram, the results improved to 96.33% accuracy, 94.99% specificity, and 98.13% sensitivity. Our findings underscore that traditional features and deep features are complementary when fused in a Bi-LSTM model, which outperforms existing PPD detection methods in both efficiency and accuracy.
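
As an aside, three of the hand-crafted features named in this abstract (zero-crossing rate, short-time energy, and root mean square) are straightforward to compute per frame. The sketch below is a minimal pure-Python illustration on a synthetic tone, not the authors' pipeline (which also uses MFCC, chroma_cens, and 2D-convolution spectrogram features):

```python
import math

def frames(x, frame_len, hop):
    """Split a signal (a list of floats) into overlapping frames."""
    return [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return sum(a * b < 0 for a, b in zip(frame, frame[1:])) / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in frame) / len(frame)

def rms(frame):
    """Root mean square amplitude of the frame."""
    return math.sqrt(short_time_energy(frame))

# 1 s of a 100 Hz tone sampled at 8 kHz, as a stand-in for a cough clip
sr = 8000
x = [math.sin(2 * math.pi * 100 * n / sr + 0.1) for n in range(sr)]
fs = frames(x, frame_len=400, hop=200)
zcr = sum(zero_crossing_rate(f) for f in fs) / len(fs)
print(round(zcr, 3))  # a 100 Hz tone crosses zero ~200 times/s, so ~0.025 per sample pair
```

In a real pipeline, these per-frame values form time series that are fed, alongside MFCC and chroma features, into the recurrent model.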

  2. Results comparison of proposed model P (1) to P (4) over validation set of 5 runs.

    • figshare.com
    xls
    Updated Mar 12, 2024
    Cite
    Hassaan Malik; Tayyaba Anees (2024). Results comparison of proposed model P (1) to P (4) over validation set of 5 runs. [Dataset]. http://doi.org/10.1371/journal.pone.0296352.t009
    Dataset updated
    Mar 12, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Hassaan Malik; Tayyaba Anees
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results comparison of proposed model P (1) to P (4) over validation set of 5 runs.

  3. RNN model parameters.

    • plos.figshare.com
    xls
    Updated May 14, 2024
    Cite
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu (2024). RNN model parameters. [Dataset]. http://doi.org/10.1371/journal.pone.0302651.t004
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Description identical to dataset 1 above (table from the same PLOS ONE article, doi:10.1371/journal.pone.0302651).

  4. US AI Cough Monitoring Market Size By Product Type (Hardware Devices, Software Solutions), By Application (Chronic Disease Management, Acute Illness Detection), By End-user (Hospitals & Clinics, Home Care), By Geographic Scope And Forecast

    • verifiedmarketresearch.com
    Updated Apr 16, 2025
    Cite
    VERIFIED MARKET RESEARCH (2025). US AI Cough Monitoring Market Size By Product Type (Hardware Devices, Software Solutions), By Application (Chronic Disease Management, Acute Illness Detection), By End-user (Hospitals & Clinics, Home Care), By Geographic Scope And Forecast [Dataset]. https://www.verifiedmarketresearch.com/product/us-ai-cough-monitoring-market/
    Dataset updated
    Apr 16, 2025
    Dataset authored and provided by
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Description

    The US AI Cough Monitoring Market was valued at USD 6 Billion in 2024 and is expected to reach USD 10.70 Billion by 2032, growing at a CAGR of 7.5% over the forecast period of 2026 to 2032.
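
The stated figures are mutually consistent: compounding the USD 6 Billion 2024 base at 7.5% for the eight years to 2032 lands on roughly USD 10.70 Billion. A quick check (the 8-year horizon is an assumption read off the 2024 and 2032 endpoints):

```python
# Compound-growth sanity check of the figures quoted above.
base, cagr, years = 6.00, 0.075, 8  # USD Billion, annual rate, 2024 -> 2032
projected = base * (1 + cagr) ** years
print(round(projected, 2))  # ~10.70, matching the stated 2032 figure
```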

    US AI Cough Monitoring Market Drivers

    Increasing Prevalence of Respiratory Diseases: The rising incidence of chronic respiratory conditions like asthma, COPD, and cystic fibrosis, as well as infectious diseases such as influenza and tuberculosis, creates a substantial need for effective monitoring tools. AI-powered cough analysis can aid in early detection, diagnosis, and management of these conditions.

    Growing Preference for Remote Patient Monitoring and Home Healthcare: The shift towards value-based care and the increasing desire for patient convenience are driving the adoption of remote monitoring solutions. AI-powered cough monitoring enables continuous, non-obtrusive data collection in home settings, reducing the need for frequent hospital visits.

    Advancements in Artificial Intelligence and Machine Learning: Significant progress in AI and ML algorithms allows for increasingly accurate analysis of cough sounds. These algorithms can differentiate between various types of coughs and identify subtle acoustic biomarkers indicative of specific respiratory conditions.

    Development of Sophisticated Sensors and Wearable Devices: The proliferation of smartphones, smartwatches, and dedicated respiratory sensors equipped with microphones provides a platform for continuous cough data acquisition. AI algorithms can be integrated into these devices for real-time analysis.

  5. Result of the feature selection experiment.

    • plos.figshare.com
    xls
    Updated May 14, 2024
    Cite
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu (2024). Result of the feature selection experiment. [Dataset]. http://doi.org/10.1371/journal.pone.0302651.t003
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Description identical to dataset 1 above (table from the same PLOS ONE article, doi:10.1371/journal.pone.0302651).

  6. Results comparison of the proposed model with other baseline models.

    • plos.figshare.com
    xls
    Updated Mar 12, 2024
    Cite
    Hassaan Malik; Tayyaba Anees (2024). Results comparison of the proposed model with other baseline models. [Dataset]. http://doi.org/10.1371/journal.pone.0296352.t010
    Dataset updated
    Mar 12, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Hassaan Malik; Tayyaba Anees
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results comparison of the proposed model with other baseline models.

  7. Classification results of audio features by 4 models.

    • plos.figshare.com
    xls
    Updated May 14, 2024
    Cite
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu (2024). Classification results of audio features by 4 models. [Dataset]. http://doi.org/10.1371/journal.pone.0302651.t007
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Classification results of audio features by 4 models.

  8. Comparison of classification results after adding the Spectrogram.

    • plos.figshare.com
    xls
    Updated May 14, 2024
    Cite
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu (2024). Comparison of classification results after adding the Spectrogram. [Dataset]. http://doi.org/10.1371/journal.pone.0302651.t008
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Wenlong Xu; Xiaofan Bao; Xiaomin Lou; Xiaofang Liu; Yuanyuan Chen; Xiaoqiang Zhao; Chenlu Zhang; Chen Pan; Wenlong Liu; Feng Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison of classification results after adding the Spectrogram.

  9. CXR, CT scan, and CSI size and storage at each preprocessing step.

    • plos.figshare.com
    xls
    Updated Mar 12, 2024
    Cite
    Hassaan Malik; Tayyaba Anees (2024). CXR, CT scan, and CSI size and storage at each preprocessing step. [Dataset]. http://doi.org/10.1371/journal.pone.0296352.t005
    Dataset updated
    Mar 12, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Hassaan Malik; Tayyaba Anees
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CXR, CT scan, and CSI size and storage at each preprocessing step.

