Language modeling, in which a system learns to predict text via machine learning, is a complex task, especially when it is used to produce context-based text predictions. Models such as Google's BERT are quite good at this, but BERT's bidirectional mechanism, which relies on context both to the left and to the right of a word, is not well suited to natural language generation.
To address this limitation, Microsoft has devised its UNIfied pre-trained Language Model (UniLM), which takes an alternative approach to language modeling. It is jointly trained on unidirectional, sequence-to-sequence, and bidirectional prediction tasks, so it can be used for both language understanding and language generation. Microsoft claims that the model achieved state-of-the-art results on a sampling of abstractive summarization, generative question answering, and language generation data sets, putting it on par with BERT in all key aspects.
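The key idea behind that combined mechanism is that a single shared network can be switched between the three prediction tasks simply by changing its self-attention mask. The snippet below is a minimal sketch in PyTorch, not Microsoft's released code; the function names and the segment layout for the sequence-to-sequence case are illustrative assumptions.

```python
import torch

def bidirectional_mask(seq_len: int) -> torch.Tensor:
    """Every token may attend to every other token (BERT-style understanding)."""
    return torch.zeros(seq_len, seq_len, dtype=torch.bool)  # False = attention allowed

def unidirectional_mask(seq_len: int) -> torch.Tensor:
    """Each token may attend only to itself and tokens to its left (generation)."""
    # True entries mark blocked positions (everything above the diagonal).
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

def seq2seq_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    """Source tokens attend bidirectionally within the source segment;
    target tokens attend to the whole source plus the target prefix."""
    total = src_len + tgt_len
    mask = torch.ones(total, total, dtype=torch.bool)       # start fully blocked
    mask[:src_len, :src_len] = False                         # source <-> source
    mask[src_len:, :src_len] = False                         # target  -> source
    mask[src_len:, src_len:] = torch.triu(                   # target  -> target prefix
        torch.ones(tgt_len, tgt_len, dtype=torch.bool), diagonal=1)
    return mask
```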
UniLM is essentially a multi-layer stack of Transformer blocks trained on large amounts of text and optimized for language modeling.
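Because the model is just such a stack, the same weights can consume any of the masks sketched above. Below is a hedged illustration using PyTorch's built-in encoder layers; the dimensions are placeholders and do not reflect UniLM's actual configuration.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the released UniLM is a much larger BERT-style network.
VOCAB_SIZE, D_MODEL, N_HEADS, N_LAYERS, SEQ_LEN = 28996, 256, 4, 4, 16

embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=N_HEADS, batch_first=True),
    num_layers=N_LAYERS,
)
lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)

tokens = torch.randint(0, VOCAB_SIZE, (1, SEQ_LEN))  # dummy token ids

# A causal (unidirectional) mask; the bidirectional or seq2seq masks from the
# previous sketch can be dropped in here to switch objectives on the same weights.
attn_mask = torch.triu(torch.ones(SEQ_LEN, SEQ_LEN, dtype=torch.bool), diagonal=1)

hidden = encoder(embed(tokens), mask=attn_mask)
logits = lm_head(hidden)      # per-position scores over the vocabulary
print(logits.shape)           # torch.Size([1, 16, 28996])
```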
UniLM has been trained on English Wikipedia and the open-source BookCorpus, giving it a vocabulary of 28,996 tokens. The researchers say the model performed well across all of the tasks on which it was evaluated, and that its results on the GLUE benchmark were as good as BERT's. They further note that UniLM surpassed all previous state-of-the-art models on five natural language generation data sets: CNN/DailyMail, Gigaword, SQuAD, CoQA, and DSTC7.
The team is looking to expand UniLM's capabilities further by training larger, more complex models and by extending it to cross-lingual tasks.
You can check out the pre-trained models and code on GitHub.