Automatic Measurement of Affective Valence and Arousal in Speech
Asgari, Meysam; Kiss, Géza; van Santen, Jan; Shafran, Izhak; Song, Xubo.
Affiliation
  • Asgari M; Center for Spoken Language Understanding, Oregon Health & Science University.
  • Kiss G; Center for Spoken Language Understanding, Oregon Health & Science University.
  • van Santen J; Center for Spoken Language Understanding, Oregon Health & Science University.
  • Shafran I; Center for Spoken Language Understanding, Oregon Health & Science University.
  • Song X; Center for Spoken Language Understanding, Oregon Health & Science University.
Article in English | MEDLINE | ID: mdl-33642942
Methods are proposed for measuring affective valence and arousal in speech. The methods apply support vector regression to prosodic and text features to predict human valence and arousal ratings of three stimulus types: speech, delexicalized speech, and text transcripts. Text features are extracted from transcripts via a lookup table listing per-word valence and arousal values, from which per-utterance statistics are computed. Prediction of arousal ratings of delexicalized speech and of speech from prosodic features was successful, with accuracy levels approaching the limits set by the reliability of the human ratings. Prediction of valence for these stimulus types, as well as prediction of both dimensions for text stimuli, proved more difficult, even though the corresponding human ratings were equally reliable. Text-based features did, however, add to the accuracy of valence prediction for speech stimuli. We conclude that arousal of speech can be measured reliably, but not valence, and that improving the latter requires better lexical features.
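The lexical feature extraction described above can be sketched in a few lines. The snippet below uses a tiny, made-up affect lexicon (the actual lookup table, word values, and statistic set used in the paper are not specified in this abstract); it maps a transcript to per-utterance statistics (mean, min, max, standard deviation) over the per-word valence and arousal values, which would then be fed to a regressor such as support vector regression:

```python
from statistics import mean, stdev

# Hypothetical per-word affect lexicon: word -> (valence, arousal),
# on a 1-9 scale. A stand-in for the real lookup table the paper uses.
LEXICON = {
    "happy": (8.2, 6.5),
    "calm": (7.0, 2.0),
    "angry": (2.5, 7.2),
    "sad": (2.1, 3.8),
}

def utterance_features(transcript, lexicon=LEXICON):
    """Compute per-utterance statistics over per-word valence/arousal
    values, skipping out-of-vocabulary words."""
    pairs = [lexicon[w] for w in transcript.lower().split() if w in lexicon]
    if not pairs:
        return None  # no covered words: no lexical features available
    feats = {}
    for name, col in (("valence", 0), ("arousal", 1)):
        xs = [p[col] for p in pairs]
        feats[f"{name}_mean"] = mean(xs)
        feats[f"{name}_min"] = min(xs)
        feats[f"{name}_max"] = max(xs)
        feats[f"{name}_sd"] = stdev(xs) if len(xs) > 1 else 0.0
    return feats

# Example: features for a short utterance
print(utterance_features("I am happy and calm"))
```

A fixed-length feature vector like this (one entry per statistic and dimension) is what a support vector regressor expects as input; out-of-vocabulary words are simply ignored, which is one reason the abstract points to better lexical features as the path to improving valence prediction.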

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Proc IEEE Int Conf Acoust Speech Signal Process Year: 2014 Document type: Article Country of publication: United States
