by guntisbarzdins | Jun 30, 2022 | NLP
Training large neural models for speech and language processing (NLP) requires not only a lot of data input (read here on why and here on how SELMA is handling this) but also a lot of computing resources. Nowadays, a fair share of computing resources are...
by kseniaskriptchenko | May 26, 2022 | AI, News, HLT
Machine learning requires large quantities of labeled training data (for more insights, read this post). That means that, in order to reach acceptable performance, training current speech recognition systems demands thousands of hours of transcribed speech. For...
by Tugtekin Turan | Dec 24, 2021 | News, HLT
What does machine learning in the language field have to do with a cake, you might ask yourself? And how come we can, in the end, produce better subtitles? Don’t look any further: read on! Let them be cake! Facebook’s AI Director Yann LeCun...
by Pedro Ferreira | Sep 9, 2021 | NER, News, HLT
As the world moves faster, more and more information is generated every day. In order to make sense of such large amounts of information, journalists and media monitors benefit from automatic processes that are capable of extracting entities present in the text....
by kseniaskriptchenko | Jul 20, 2021 | News, HLT
Making “sense” of speech with Curriculum Learning Methods Humans need about two decades to be trained as fully functional adults of our society. Quite some time, and yet still fast compared to where we stand if we want to copy the learning curve from a human to a...