

Facebook’s AI research lab has updated fastText. The update more than triples the number of languages the library supports and adds enhancements that sharply reduce model size and memory demand.

In case you are unaware of fastText, it is a fast, open-source text classification library that makes it easy for developers to build tools based on language analysis. This comes in handy wherever the language of the content needs to be understood and analyzed. For instance, a tool that recognizes and thwarts clickbait headlines, or one that filters spam, needs an understanding of the language it is reading.
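To give a sense of how that looks in practice, here is a minimal sketch of a spam classifier built with fastText’s Python bindings. The file names and labels are placeholders; the training file uses fastText’s standard one-example-per-line format with __label__ prefixes.

```python
import fasttext

# train.txt is assumed to contain one labeled example per line, e.g.:
#   __label__spam  WIN A FREE PHONE click here now
#   __label__ham   Meeting moved to 3pm, agenda attached
model = fasttext.train_supervised(input="train.txt")

# Predict the most likely label (and its probability) for new text.
labels, probs = model.predict("You won't believe what happened next")
print(labels[0], probs[0])

model.save_model("spam_filter.bin")
```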

The library was first released with support for 90 languages, including pre-trained word vectors. That number has since grown, and today’s update takes it to 294. However, while the team behind the library wanted it to run on a wide variety of hardware, its requirement of a few gigabytes of memory meant it could not run on mobile devices.
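Using those pre-trained vectors is equally straightforward. The sketch below assumes you have downloaded one of the released models (the file name here is illustrative); note that loading a full .bin model is exactly the multi-gigabyte step that kept the library off mobile hardware.

```python
import fasttext

# Assumes a pre-trained model file downloaded from the fastText
# website; loading the full binary model takes several GB of RAM.
model = fasttext.load_model("wiki.en.bin")

vec = model.get_word_vector("language")  # one vector per word
print(vec.shape)
```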

That is set to change as well. Facebook has now reduced the memory requirement to just a few hundred kilobytes. The team achieved this by compressing the model’s feature vectors rather than storing them in full, drastically cutting memory demands.

As Facebook’s researchers put it:

A few key ingredients, namely feature pruning, quantization, hashing, and re-training, allow us to produce text classification models with tiny size, often less than 100kB when trained on several popular datasets, without noticeably sacrificing accuracy or speed.
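The library exposes these compression steps through its quantize operation. Below is a minimal sketch using the Python bindings (file names are assumptions): cutoff prunes the feature table, qnorm enables vector quantization, and retrain=True fine-tunes the pruned model on the original data to recover accuracy.

```python
import fasttext

# Train a full-size classifier first.
model = fasttext.train_supervised(input="train.txt")

# Compress it: keep only the 100k most useful features, quantize
# the remaining vectors, and retrain to recover lost accuracy.
model.quantize(input="train.txt", cutoff=100000, qnorm=True, retrain=True)

# Quantized models conventionally use the .ftz extension.
model.save_model("model.ftz")

# The compressed model predicts exactly like the original.
labels, probs = model.predict("example headline to classify")
```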

The researchers believe that the memory requirements may be reduced even further in the future. However, maintaining accuracy is equally important, which is why the research will proceed cautiously. In the meantime, thanks to the latest size reduction, you can now run the library on your mobile device.
