

BRAIN. Broad Research in Artificial Intelligence and Neuroscience, Vol 2, No 2 (2011)


Development of an Automatic Speech to Facial Animation Conversion to Improve Deaf Lives

S. Hamidreza Kasaei, S. Mohammadreza Kasaei, S. Alireza Kasaei

Abstract


In this paper, we present the design and an initial implementation of a robust system that automatically translates voice into text and text into sign language animations. Sign language translation systems could significantly improve the lives of deaf people, especially in communication and the exchange of information, by employing machines to translate conversations from one language to another. For these reasons, a study of speech recognition is necessary. Voice recognition algorithms typically address three major challenges: the first is extracting features from speech, the second is recognition when only a limited sound gallery is available, and the third is moving from speaker-dependent to speaker-independent recognition. Extracting features from speech is an important stage in our method. Several procedures are available for this task; one of the most common in speech recognition systems is Mel-Frequency Cepstral Coefficients (MFCCs). The algorithm starts with preprocessing and signal conditioning, then extracts features from the speech using cepstral coefficients. The result of this process is sent to the segmentation stage. Finally, the recognition stage recognizes the words, and each recognized word is converted to facial animation. The project is still in progress, and some new and interesting methods are described in the current report.
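The MFCC front end described above (preprocessing, framing, spectral analysis, mel filtering, cepstral coefficients) can be sketched in plain NumPy. This is a minimal illustrative implementation, not the paper's actual code; the parameter values (0.97 pre-emphasis, 25 ms frames with 10 ms hop at 16 kHz, 26 mel filters, 13 coefficients) are common textbook defaults assumed here, not values taken from the paper.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_mels=26, n_ceps=13):
    """Compute MFCC features for a 1-D audio signal (illustrative sketch)."""
    # 1. Pre-emphasis: boost high frequencies (signal conditioning).
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])

    # 2. Framing and Hamming windowing (25 ms frames, 10 ms hop at 16 kHz).
    n_frames = 1 + (len(emph) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emph[idx] * np.hamming(frame_len)

    # 3. Power spectrum via the real FFT.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # 4. Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge

    # 5. Log mel energies, then a DCT-II to decorrelate -> cepstral coefficients.
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct_mat = np.cos(np.pi * np.arange(n_ceps)[:, None] * (2 * n + 1) / (2 * n_mels))
    return logmel @ dct_mat.T   # shape: (n_frames, n_ceps)

# Usage: one second of a 440 Hz tone yields one 13-coefficient vector per frame.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
features = mfcc(sig)   # shape (98, 13)
```

The resulting per-frame feature vectors would then feed the segmentation and recognition stages mentioned in the abstract.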


(C) 2010 Broad Research - international virtual organization & EduSoft Publishing