Current research includes several neurologically focused projects (quantitative characterization of prosody in autism, instrumental approaches to neurogenic speech disorders, automated neuropsychological assessment), as well as basic speech technology research.
Meant to aid in the diagnosis of people with Autism Spectrum Disorder (ASD).
"The expression of affect in face-to-face situations requires the ability to generate a complex, coordinated, cross-modal affective signal, having gesture, facial expression, vocal prosody, and language content modalities. This ability is compromised in neurological disorders such as Parkinson's disease and autism spectrum disorder (ASD). The PI's long term goal is to build computer-based interactive, agent based systems for remediation of poor affect communication and diagnosis of the underlying neurological disorders based on analysis of affective signals. A requirement for such systems is technology to detect atypical patterns in affective signals. The objective of this project is to develop that technology. Toward that end the PI will develop a play situation for eliciting affect, will collect audio-visual data from approximately 60 children between the ages of 4-7 years old, half of them with ASD and the other half constituting a control group of typically developing children. The PI will label the data on relevant affective dimensions, will develop algorithms for the analysis of affective incongruity, and will then test the algorithms against the labeled data in order to determine their ability to differentiate between ASD and typical development. While automatic methods for cross-modal recognition of discrete affect classes already have yielded promising results, automatic detection and quantification of atypical patterns in affective signals, and the ability to do so in semi-natural interactive situations, is unexplored territory. The PI expects this research will lead to new methods for affect recognition based on facial affective features (with special emphasis on facial frontalization algorithms and on modeling of facial expressive dynamics), vocal affective features, and lexical affective features, as well as to new methods for automated measurement of cross-modal affective incongruity."
The goal of the software is to develop a synthetic voice for an Augmentative and Alternative Communication (AAC) system that sounds like the individual using the system (before they lost the ability to speak), without requiring much recorded data from the original talker.
This software is meant to aid in the diagnosis of infants with Autism Spectrum Disorder (ASD) by quantifying and analyzing symptoms of the disorder.
Meant to aid in the understanding of speech produced by people with dysarthria.
This software provides new approaches that go beyond filtering speech signals to analyzing them at the acoustic, articulatory, phonetic, and linguistic levels.
Meant as an aid in the assessment of Attention Deficit Hyperactivity Disorder (ADHD). The software/computerized system includes these features: "a clear understanding of which neuropsychological functions are measured, interactivity (the computer adapts its behavior instantly to the subjects' responses, thereby being able to operate at a level of optimal sensitivity), instantaneous and timed measurement of a range of behavioral responses including the force dynamics of button pushing and eye movements, mathematical modeling of the underlying cognitive processes in order to derive purer measures of the neuropsychological functions."
The software is an improvement on and expansion of the Listening Skills Tutor program, developed by researchers at the Center for Spoken Language Understanding at Oregon Health & Science University and specialists at the Tucker-Maxon Oral School.
An "algorithm in the area of text-to-speech synthesis (TTS) that will lead to (i) dramatic decreases in disk and memory requirements at a given speech quality level and (ii) minimization of the amount of voice recordings needed to create a new synthetic voice."
An aid in the investigation of prosody in people with Autism Spectrum Disorder (ASD). The software consists of computer-based speech and language technologies for quantifying expressive prosody, for computing dialogue structure, and for generating acoustically controlled speech stimuli for measuring receptive prosody.
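To give a concrete sense of what "quantifying expressive prosody" can involve, the sketch below computes a few common summary statistics over a fundamental-frequency (F0) contour. This is a minimal illustration under stated assumptions, not the project's actual method: it assumes a pitch tracker has already produced per-frame F0 values in Hz, with unvoiced frames marked as 0.0, and the function name `prosody_stats` is hypothetical.

```python
# Minimal sketch of prosody quantification from an F0 contour.
# Assumes per-frame F0 values in Hz from a pitch tracker, with 0.0
# marking unvoiced frames. Illustrative only; not the project's method.
import math

def prosody_stats(f0_contour):
    """Return simple prosodic measures computed over the voiced frames."""
    voiced = [f for f in f0_contour if f > 0.0]
    if not voiced:
        return {"mean_f0": 0.0, "range_semitones": 0.0, "cv": 0.0}
    mean_f0 = sum(voiced) / len(voiced)
    # Pitch range in semitones (perceptually more meaningful than Hz).
    range_st = 12.0 * math.log2(max(voiced) / min(voiced))
    # Coefficient of variation: F0 variability normalized by the mean.
    sd = math.sqrt(sum((f - mean_f0) ** 2 for f in voiced) / len(voiced))
    return {"mean_f0": mean_f0, "range_semitones": range_st, "cv": sd / mean_f0}

# Example: a contour rising from 200 to 300 Hz with unvoiced gaps.
contour = [200.0, 0.0, 220.0, 250.0, 0.0, 280.0, 300.0]
stats = prosody_stats(contour)
```

Measures like these (mean pitch, pitch range, pitch variability) are among the standard acoustic correlates of expressive prosody; a real system would combine many such features with timing and intensity measures.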
This technology performs a voice transplant of a child's natural voice onto the AAC device, so that the device's voice will sound like the child. An AAC device with a personalized voice that mimics the child's own will reinforce motivation and a sense of ownership over communication, enhancing the frequency and richness of AAC use as well as its acceptance by family members and friends. In addition, as a tool for improving a child's speech capabilities, a system that speaks with a voice similar to the child's own is more effective than one that speaks with a default synthetic voice, because it provides a model that is closer to the child's speech and hence easier for the child to emulate. Meant for use by children with Autism Spectrum Disorder (ASD).