A team of researchers at Carnegie Mellon University is working to expand automatic speech recognition to 2,000 languages. Today, only a fraction of the estimated 7,000 to 8,000 languages spoken around the world are supported by modern language technologies such as voice-to-text transcription or automatic captioning.
Xinjian Li is a Ph.D. student in the School of Computer Science’s Language Technologies Institute (LTI).
“A lot of people in this world speak diverse languages, but language technology tools aren’t being developed for all of them,” he said. “Developing technology and a good language model for all people is one of the goals of this research.”
Li belongs to a team of experts working to reduce the amount of data required to build a speech recognition model for a new language.
The team also includes LTI faculty members Shinji Watanabe, Florian Metze, David Mortensen and Alan Black.
The research, titled “ASR2K: Speech Recognition for Around 2,000 Languages Without Audio,” was presented at Interspeech 2022 in South Korea.
Most existing speech recognition models require both text and audio data sets for training. While text data exists for thousands of languages, the same is not true for audio. The team wants to eliminate the need for audio data by focusing on linguistic elements that are common across many languages.
Speech recognition technologies normally focus on a language’s phonemes, the distinct units of sound that distinguish one word from another. Phoneme inventories are unique to each language. Phones, by contrast, describe how speech actually sounds physically, and multiple phones can correspond to a single phoneme. So while separate languages can have different phoneme inventories, the underlying phones are often shared.
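To make that distinction concrete, here is a minimal sketch of how a shared phone inventory can sit underneath different per-language phoneme inventories. The aspiration facts (English /p/ surfacing as aspirated [pʰ] or plain [p], while Hindi treats the two as separate phonemes) are standard phonetics, but the data structures and names are hypothetical illustrations, not the team’s implementation.

```python
# Hypothetical sketch, not the ASR2K implementation: a universal phone
# inventory shared across languages, with per-language phoneme maps on top.

# Phones: physical speech sounds, shared across languages.
PHONES = {"p", "pʰ", "b"}

# Phonemes: language-specific sound categories, each realized by one or more phones.
PHONEME_TO_PHONES = {
    "eng": {
        "/p/": {"p", "pʰ"},  # one English phoneme covers two phones (allophones)
        "/b/": {"b"},
    },
    "hin": {
        "/p/": {"p"},        # Hindi contrasts plain vs. aspirated stops,
        "/pʰ/": {"pʰ"},      # so they are distinct phonemes
        "/b/": {"b"},
    },
}

def phone_inventory(lang: str) -> set[str]:
    """All phones that a language's phonemes can surface as."""
    return set().union(*PHONEME_TO_PHONES[lang].values())

# Different phoneme inventories, identical underlying phones:
print(phone_inventory("eng") == phone_inventory("hin"))  # True
```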
The team is working on a speech recognition model that relies less on phonemes and more on information about how phones are shared between languages, which reduces the effort of building a separate model for each one. Pairing the model with a phylogenetic tree, a diagram that maps the relationships between languages, lets it borrow pronunciation rules from related languages. Together, the model and the tree structure have enabled the team to approximate speech models for thousands of languages without any audio data.
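The paper’s actual method is more involved, but a toy sketch can show the intuition behind borrowing pronunciation rules from relatives in a phylogenetic tree: given a language with no grapheme-to-phoneme (G2P) rules of its own, fall back to the known language that shares the lowest common ancestor with it. Every tree, rule table, and function below is a made-up illustration, not the ASR2K code.

```python
# Illustrative sketch: approximate a zero-resource language's G2P rules
# by borrowing from its nearest relative in a toy phylogenetic tree.

# Toy tree: child node -> parent node.
PARENT = {
    "spa": "romance", "por": "romance", "romance": "indo-european",
    "hin": "indo-aryan", "mar": "indo-aryan", "indo-aryan": "indo-european",
}

# G2P rules we actually have (only for a few high-resource languages).
KNOWN_G2P = {
    "spa": {"c": "k", "j": "x"},
    "hin": {"c": "tʃ", "j": "dʒ"},
}

def ancestors(lang: str) -> list[str]:
    """Nodes on the path from the language up to the tree root."""
    path, node = [], lang
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def approximate_g2p(lang: str) -> dict[str, str]:
    """Borrow rules from the known language whose shared ancestor
    with the target sits lowest in the tree."""
    if lang in KNOWN_G2P:
        return KNOWN_G2P[lang]
    target_path = ancestors(lang)

    def distance(known: str) -> int:
        shared = [a for a in ancestors(known) if a in target_path]
        return target_path.index(shared[0]) if shared else len(target_path)

    return KNOWN_G2P[min(KNOWN_G2P, key=distance)]

print(approximate_g2p("por"))  # borrows Spanish rules: {'c': 'k', 'j': 'x'}
print(approximate_g2p("mar"))  # borrows Hindi rules: {'c': 'tʃ', 'j': 'dʒ'}
```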
“We are trying to remove this audio data requirement, which helps us move from 100 to 200 languages to 2,000,” Li said. “This is the first research to target such a large number of languages, and we’re the first team aiming to expand language tools to this scope.”
The research, while still at an early stage, has improved existing language approximation tools by 5%.
“Each language is a very important factor in its culture. Each language has its own story, and if you don’t try to preserve languages, those stories might be lost,” Li said. “Developing this kind of speech recognition system and this tool is a step to try to preserve those languages.”