
Review 2: "The Acoustic Dissection of Cough: Diving into Machine Listening-based COVID-19 Analysis and Detection"

This preprint reports on a machine learning model for detecting COVID-19 by analyzing patients’ cough sounds. Reviewers deemed the findings potentially informative and promising, with a few limitations that could be addressed.

Published on Apr 07, 2022
This Pub is a Review of
The Acoustic Dissection of Cough: Diving into Machine Listening-based COVID-19 Analysis and Detection

Abstract

Purpose: The coronavirus disease 2019 (COVID-19) has caused a crisis worldwide. Many efforts have been made to prevent and control COVID-19's transmission, from early screenings to vaccinations and treatments. With the recent emergence of automatic disease recognition applications based on machine listening techniques, it would be fast and cheap to detect COVID-19 from recordings of cough, a key symptom of COVID-19. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited, but such knowledge would be essential for structuring effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds.

Methods: Drawing on the theory of computational paralinguistics, we analyse the acoustic correlates of COVID-19 cough sounds based on the COMPARE feature set, i.e., a standardised set of 6,373 higher-level acoustic features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions.

Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, are relevant for the differentiation between COVID-19 positive and COVID-19 negative cough samples. Our automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set consisting of 1,411 cough samples (COVID-19 positive/negative: 210/1,201).

Conclusions: Based on the acoustic correlates analysis on the COMPARE feature set and the feature analysis in the effective COVID-19 detection model, we find that the machine learning method to a certain extent relies on acoustic features showing higher effects in conventional group difference testing.
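For context on the reported metric: the unweighted average recall (UAR) is the mean of per-class recalls, so the minority class (here, the 210 COVID-19 positive samples) counts as much as the majority class, and chance level on a binary task is 0.5 regardless of imbalance. A minimal sketch (the toy labels below are illustrative, not from the study):

```python
def uar(y_true, y_pred, classes=(0, 1)):
    # Mean of per-class recalls: each class contributes equally,
    # regardless of how many samples it has.
    recalls = []
    for c in classes:
        preds_for_c = [p for t, p in zip(y_true, y_pred) if t == c]
        recalls.append(sum(1 for p in preds_for_c if p == c) / len(preds_for_c))
    return sum(recalls) / len(recalls)

# Toy imbalanced example (2 positives, 4 negatives):
y_true = [1, 1, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0]
print(uar(y_true, y_pred))  # recall_pos = 0.5, recall_neg = 0.75 -> 0.625
```

Plain accuracy on the same toy example would be 4/6 ≈ 0.67, which illustrates why UAR is the fairer yardstick on a 210/1,201 split.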

RR:C19 Evidence Scale rating by reviewer:

  • Reliable. The main study claims are generally justified by its methods and data. The results and conclusions are likely to be similar to the hypothetical ideal study. There are some minor caveats or limitations, but they would/do not change the major claims of the study. The study provides sufficient strength of evidence on its own that its main claims should be considered actionable, with some room for future revision.

***************************************

Review:

This study adds to the growing body of evidence on the use of bioacoustic signal analysis and machine learning methods for the detection of COVID-19. The paper focuses on the analysis of cough sounds, assessing the predictive power of several acoustic features previously used in the field of computational paralinguistics. The authors claim that machine learning models trained on data consisting of such acoustic features extracted from recordings of patients' coughs can distinguish between cough samples of symptomatic and asymptomatic COVID-19 patients and controls. They identify a subset of these acoustic features which contribute the most to the model's predictive power, finding some consistency between features identified through machine learning and those identified by conventional statistical testing. Based on RR:C19’s Strength of Evidence Scale, these claims are reliable, generally supported by the data and methods used, and therefore actionable with some limitations.

The paper presents a good, though not exhaustive, review of the growing literature on COVID-19 detection based on speech and cough sounds. However, the main contribution of the study lies not in improvements in prediction accuracy over the state of the art, but in its detailed analysis of the acoustic features of cough sounds relevant to COVID-19 detection, based on a carefully curated, publicly available data set. A subset of the COUGHVID data set (Orlandic et al. 2021) was selected to support three binary prediction tasks: COVID-19 positive versus negative, symptomatic COVID-19 positive versus symptomatic COVID-19 negative patients, and asymptomatic COVID-19 positive patients versus asymptomatic controls. Overall, the models achieve moderate discriminatory power, with the area under the receiver operating characteristic curve (AUC) ranging between 0.61 and 0.67 for the best model on each task.
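As a reading aid for these scores: the AUC equals the probability that a randomly chosen positive sample receives a higher model score than a randomly chosen negative one, so 0.61–0.67 sits modestly above the 0.5 chance level. A minimal pure-Python sketch of this pairwise definition (the scores below are illustrative only):

```python
def auc(scores_pos, scores_neg):
    # AUC = P(score_pos > score_neg), counting ties as 0.5.
    # O(n*m) pairwise form; rank-based formulas are equivalent.
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.6, 0.4]          # model scores for positive samples
neg = [0.5, 0.3, 0.2, 0.1]     # model scores for negative samples
print(auc(pos, neg))           # 11 of 12 pairs ranked correctly -> ~0.917
```

An AUC of 0.65, by this reading, means roughly two out of three random positive/negative pairs are ranked correctly.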

The analysis of the bioacoustic features is generally informative, comparing the discriminative power of the low-level descriptors as assessed by conventional non-parametric statistical testing with that indicated by machine learning feature scores. The methods are well described, and the results are carefully presented, summarised, and discussed in relation to features found to be relevant in other studies. The authors, however, do not attempt to discuss the implications of these findings for COVID-19 pathology or their potential clinical relevance, which limits this otherwise interesting contribution.
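The review refers to conventional non-parametric statistical testing without naming the procedure; assuming a Mann-Whitney U test, a common choice for comparing a single feature's distribution across two groups, a minimal sketch of the statistic and its large-sample z-score (no tie correction) might look like:

```python
import math

def mann_whitney_z(a, b):
    # U statistic: number of pairs (x in a, y in b) with x > y,
    # counting ties as 0.5 -- the same pair count that underlies AUC.
    n1, n2 = len(a), len(b)
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0
            for x in a for y in b)
    # Large-sample normal approximation (valid for moderate n, no ties):
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return u, (u - mu) / sigma

# Toy feature values for two groups (illustrative only):
u, z = mann_whitney_z([3.1, 4.2, 5.0], [1.2, 2.4])
print(u, z)  # u = 6.0: every pair favours the first group
```

Since U/(n1*n2) for a single feature is exactly that feature's AUC, agreement between such tests and machine learning feature scores, as the study reports, is not surprising for features acting largely independently.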

Although the authors made great efforts to improve the reliability of the data, the data set remains a major limitation of the study. The authors acknowledge as much in a separate section of the paper, pointing out that the self-reported nature of the data collection procedure makes it impossible to verify the accuracy of the participants' COVID-19 status. Although the risk of selection bias is mentioned, the paper lacks a proper discussion of age bias, which would seem important given that the data consist largely of samples from participants in their 20s and 30s. While this and the relatively low AUC scores undermine the authors' claim regarding the usefulness of the machine learning models in clinical settings, the results show promise, highlighting an area that deserves further research.

Orlandic, L., Teijeiro, T. & Atienza, D. The COUGHVID crowdsourcing dataset, a corpus for the study of large-scale cough analysis algorithms. Sci Data 8, 156 (2021).
