Pavel Osokin is the Co-Founder & CEO of AMAI, a San Francisco-based startup that builds AI voice engines. Pavel leads AMAI’s operations and strategy with a professional ambition to install its voice technology into every phone in the world. At AMAI, his team has developed an AI voice that 97% of users could not distinguish from real human speech.
You’ve been a lifelong entrepreneur, having launched your first company at age 13. What was your first attempt at business, and what do you feel motivated this entrepreneurial mindset?
I didn’t really call it a company, but I made my first money by reselling some things or just washing cars on the street with a bucket. My motivation was that I wanted a Coke or a Snickers, and my parents did not have any money. I could either wait for the money to appear or earn it myself. Waiting does not appeal to me.
Could you share the genesis story behind AMAI?
I asked my partner, “What do companies around the world need?” In that conversation, I realized that every business is looking for a “sale.” We started making bots that could correspond with customers and sell products via email and messaging apps. On the other hand, it wasn’t anything particularly new, as there are plenty of chatbots on the market. So we thought it would be cool if these bots could also make calls. Since there were few good solutions on the market, we created a prototype of our own synthesized voice, and after the first sales we abandoned the bot and focused on text-to-speech (TTS).
What does AMAI stand for specifically?
It stands for “I’m AI” (I’m artificial intelligence).
Could you discuss some of the challenges behind designing state-of-the-art text-to-speech technology?
Designing state-of-the-art TTS poses several challenges. The first is collecting datasets: training a neural network requires female and male voices of varying ages, and the more, the better. Second, you need to achieve a very close resemblance to a natural voice. The best method is to test different machine learning models and to constantly experiment with different cases of voice usage; in particular, you need to find the most problematic samples and process them separately. As for long-term challenges, it can be difficult to assess whether the voice has become better or worse, and in what direction it should be improved.
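That last point, judging whether a voice got better or worse, usually comes down to listening tests. As a minimal illustrative sketch (not AMAI’s actual evaluation pipeline), the snippet below aggregates 1–5 listener ratings into a Mean Opinion Score (MOS) with a rough 95% confidence interval so two model versions can be compared; the rating values are hypothetical.

```python
import math
import statistics

def mos_with_ci(ratings: list[int], z: float = 1.96) -> tuple[float, float]:
    """Aggregate 1-5 listener ratings into a Mean Opinion Score
    and a 95% confidence half-width (normal approximation)."""
    mean = statistics.mean(ratings)
    # Standard error of the mean from the sample standard deviation.
    sem = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mean, z * sem

# Hypothetical ratings for two model versions on the same test sentences.
old_model = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4]
new_model = [5, 4, 4, 5, 5, 4, 4, 5, 5, 4]

for name, ratings in [("old", old_model), ("new", new_model)]:
    mos, ci = mos_with_ci(ratings)
    print(f"{name}: MOS = {mos:.2f} +/- {ci:.2f}")
```

If the confidence intervals of the two versions overlap heavily, the comparison is inconclusive and more listeners (or harder test sentences) are needed, which is exactly why this assessment is a long-term challenge.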
What are some of the challenges behind speech recognition when it comes to humans interacting with the AMAI voice AI?
There are hundreds of companies working on voice recognition because it is easier to develop. The problem that currently has no solution is the recognition of children’s voices. Children’s speech has many distinctive characteristics at a young age, so it is hard to take all of them into account. Nonetheless, we’ve been working on a solution to this problem, and we are very close to announcing the result, so soon our AI won’t have any problems interacting not just with adults, but also with children.
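One concrete example of those characteristics is pitch: children’s voices sit at a much higher fundamental frequency (F0) than adults’. Purely as an illustrative sketch, and not a description of AMAI’s approach, the snippet below estimates F0 with librosa’s pYIN tracker; the file name and the 250 Hz cut-off are assumptions for the example.

```python
import librosa
import numpy as np

# Hypothetical audio file; any mono WAV will do.
y, sr = librosa.load("utterance.wav", sr=16000)

# pYIN returns a frame-by-frame F0 estimate (NaN where unvoiced).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
median_f0 = np.nanmedian(f0)

# Adult speech typically sits around 85-255 Hz; children are often well above.
# The 250 Hz cut-off here is an illustrative assumption, not a tuned value.
print(f"median F0: {median_f0:.0f} Hz",
      "(possibly a child's voice)" if median_f0 > 250 else "")
```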
What are some popular use cases for AMAI?
Right now, it’s audiobook dubbing and enterprise use in call centers.
What languages are currently offered, and what languages are currently being worked on?
Our multi-speaker system includes two languages, Russian and English. The idea is that a voice created in one language can speak all the other languages in our model as well. Currently, we are collecting data for 40 more languages, and very soon we will have 42.
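The “one voice, many languages” idea is typically implemented by conditioning the acoustic model on a speaker embedding that is kept separate from the text and language inputs, so any speaker identity can be paired with any language. The PyTorch sketch below shows that general design under assumed dimensions and layer choices; it is a toy illustration, not AMAI’s proprietary model.

```python
import torch
import torch.nn as nn

class MultiSpeakerEncoder(nn.Module):
    """Toy encoder: speaker identity is a separate embedding, so any
    speaker ID can be paired with any language's phoneme sequence."""

    def __init__(self, n_phonemes=100, n_speakers=64, n_languages=42, dim=128):
        super().__init__()
        self.phoneme_emb = nn.Embedding(n_phonemes, dim)
        self.speaker_emb = nn.Embedding(n_speakers, dim)
        self.language_emb = nn.Embedding(n_languages, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, phonemes, speaker_id, language_id):
        # (batch, time, dim) text representation.
        x = self.phoneme_emb(phonemes)
        # Broadcast speaker/language conditioning over every time step.
        cond = self.speaker_emb(speaker_id) + self.language_emb(language_id)
        x = x + cond.unsqueeze(1)
        out, _ = self.encoder(x)
        return out

# A speaker recorded in one language (ID 3) paired with another
# language's phonemes (language ID 1) - the cross-lingual case.
model = MultiSpeakerEncoder()
phonemes = torch.randint(0, 100, (1, 20))  # dummy phoneme IDs
hidden = model(phonemes, torch.tensor([3]), torch.tensor([1]))
print(hidden.shape)  # torch.Size([1, 20, 128])
```

Because the speaker embedding is learned independently of the language embedding, adding data for a new language extends every existing voice to that language, which is what makes scaling from 2 to 42 languages tractable.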
What’s your vision for the future of AI voice assistants?
It is my belief that voice assistants will move into the metaverse, and we are studying these opportunities now. If you integrate the assistant with smart speakers or the web browser, more people will use voice search and interact with the assistant every day. You can talk to your refrigerator or TV.
Is there anything else that you would like to share about AMAI?
AMAI uses only its own proprietary technologies.
Thank you for the interview. Readers who wish to learn more should visit AMAI.