Facebook Boosts Cross-Lingual Language Model Pretraining Performance | Synced


Source: Synced | AI Technology & Industry Review

Facebook researchers have introduced two new methods for pretraining cross-lingual language models (XLMs). The unsupervised method uses only monolingual data, while the supervised method leverages parallel data with a new cross-lingual language modeling objective.
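The supervised objective in the XLM paper, translation language modeling (TLM), concatenates a parallel sentence pair and masks random tokens in both languages, so the model can attend across languages when predicting them. Below is a minimal sketch of that idea, assuming pre-tokenized input; the function name and `MASK` symbol are illustrative, and the real implementation also adds language and position embeddings:

```python
import random

MASK = "[MASK]"

def tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=0):
    """Build one TLM-style training example from a parallel sentence pair:
    concatenate both sides and mask random tokens, keeping the originals
    as prediction targets. (Illustrative sketch, not the XLM codebase.)"""
    rng = random.Random(seed)
    tokens = list(src_tokens) + list(tgt_tokens)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)    # loss is computed on masked positions
        else:
            inputs.append(tok)
            labels.append(None)   # no loss on unmasked positions
    return inputs, labels

# Usage: an English sentence paired with its French translation.
en = ["the", "cat", "sits"]
fr = ["le", "chat", "est", "assis"]
inputs, labels = tlm_example(en, fr)
```

Because the two halves of the pair share one input sequence, a masked English word can be recovered from its French context (and vice versa), which is what encourages cross-lingual alignment.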