Speech Motion Anomaly Detection via Cross-Modal Translation of 4D Motion Fields from Tagged MRI.
Liu, Xiaofeng; Xing, Fangxu; Zhuo, Jiachen; Stone, Maureen; Prince, Jerry L; El Fakhri, Georges; Woo, Jonghye.
Affiliations
  • Liu X; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA.
  • Xing F; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA.
  • Zhuo J; Dept. of Radiology, University of Maryland School of Medicine, Baltimore, MD 21201 USA.
  • Stone M; Dept. of Neural and Pain Sciences, University of Maryland School of Dentistry, Baltimore, MD 21201 USA.
  • Prince JL; Dept. of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218 USA.
  • El Fakhri G; Dept. of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06519, USA.
  • Woo J; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA.
Article in English | MEDLINE | ID: mdl-39238547
ABSTRACT
Understanding the relationship between tongue motion patterns during speech and their resulting acoustic outcomes, i.e., the articulatory-acoustic relation, is of great importance in assessing speech quality and developing innovative treatment and rehabilitation strategies. This is especially important when evaluating and detecting abnormal articulatory features in patients with speech-related disorders. In this work, we aim to develop a framework for detecting speech motion anomalies in conjunction with their corresponding speech acoustics. This is achieved with a deep cross-modal translator, trained on data from healthy individuals only, which bridges the gap between 4D motion fields obtained from tagged MRI and 2D spectrograms derived from speech acoustic data. The trained translator is used as an anomaly detector by measuring spectrogram reconstruction quality on healthy individuals or patients. In particular, the cross-modal translator is likely to generalize poorly to patient data, which contain unseen out-of-distribution patterns, and therefore yields subpar reconstructions compared with those of healthy individuals. A one-class SVM is then used to distinguish the spectrograms of healthy individuals from those of patients. To validate our framework, we collected a total of 39 paired tagged MRI and speech waveform datasets, comprising data from 36 healthy individuals and 3 tongue cancer patients. We used both 3D convolutional and transformer-based deep translation models, training them on the healthy training set and then applying them to both the healthy and patient testing sets. Our framework demonstrates the capability to detect abnormal patient data, illustrating its potential to enhance understanding of the articulatory-acoustic relation for both healthy individuals and patients.
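The detection pipeline in the abstract (reconstruction error from a translator trained only on healthy data, followed by a one-class SVM) can be sketched as below. This is a minimal illustration, not the authors' implementation: the trained cross-modal translator is replaced by a hypothetical stand-in function, and the motion fields and spectrograms are synthetic arrays.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def translator(spec_input):
    # Stand-in for the trained deep cross-modal translator (hypothetical).
    # In the paper this maps 4D tagged-MRI motion fields to 2D spectrograms;
    # here it just returns a slightly degraded copy, so that reconstruction
    # error is small for in-distribution (healthy-like) data.
    return spec_input * 0.9

# Synthetic "spectrograms": 36 healthy and 3 patient samples, with the
# patient data shifted out of the healthy distribution.
healthy_spec = rng.normal(0.0, 1.0, (36, 128))
patient_spec = rng.normal(0.0, 1.0, (3, 128)) + 3.0

def recon_error(spec):
    # Per-sample mean squared error between the target spectrogram and the
    # translator's reconstruction; this scalar is the anomaly feature.
    pred = translator(spec)
    return np.mean((spec - pred) ** 2, axis=1, keepdims=True)

healthy_err = recon_error(healthy_spec)
patient_err = recon_error(patient_spec)

# One-class SVM fit on healthy reconstruction errors only; +1 = inlier,
# -1 = anomaly. Patient samples, with larger errors, fall outside.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(healthy_err)
print(ocsvm.predict(healthy_err))  # mostly +1
print(ocsvm.predict(patient_err))
```

With `nu=0.1`, roughly 10% of the healthy training samples are allowed to fall outside the learned boundary, so a few healthy predictions may read -1; the clearly out-of-distribution patient errors are flagged as anomalies.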
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Proc SPIE Int Soc Opt Eng Year: 2024 Document type: Article Country of publication: United States