Classic Word Embeddings

Classic word embeddings are static and word-level: each distinct word gets exactly one pre-computed embedding vector. Most embeddings fall into this class, including the popular GloVe and Komninos embeddings.

Simply instantiate the WordEmbeddings class and pass a string identifier of the embedding you wish to load. To use GloVe embeddings, for instance, pass the string 'glove' to the constructor:

from flair.embeddings import WordEmbeddings

# init embedding
glove_embedding = WordEmbeddings('glove')

Now, create an example sentence and call the embedding's embed() method. You can also pass a list of sentences to this method since some embedding types make use of batching to increase speed.

from flair.data import Sentence

# create an example sentence
sentence = Sentence('The grass is green .')

# embed the sentence using glove
glove_embedding.embed(sentence)

# now check out the embedded tokens
for token in sentence:
    print(token)
    print(token.embedding)

This prints out the tokens and their embeddings. GloVe embeddings are PyTorch vectors of dimensionality 100.

You choose which pre-trained embeddings to load by passing the appropriate id string to the constructor of the WordEmbeddings class. Typically, you use the two-letter language code to initialize an embedding, so 'en' for English, 'de' for German, and so on. By default, this initializes FastText embeddings trained over Wikipedia. You can also use FastText embeddings trained over web crawls by appending '-crawl' to the language code: 'de-crawl', for example, uses embeddings trained over German web crawls.

For English, we provide a few more options, so here you can choose between instantiating 'en-glove', 'en-extvec' and so on.

The following embeddings are currently supported:

| ID | Language | Embedding |
| --- | --- | --- |
| 'en-glove' (or 'glove') | English | GloVe embeddings |
| 'en-extvec' (or 'extvec') | English | Komninos embeddings |
| 'en-crawl' (or 'crawl') | English | FastText embeddings over web crawls |
| 'en-twitter' (or 'twitter') | English | Twitter embeddings |
| 'en-turian' (or 'turian') | English | Turian embeddings (small) |
| 'en' (or 'en-news' or 'news') | English | FastText embeddings over news and Wikipedia data |
| 'de' | German | German FastText embeddings |
| 'nl' | Dutch | Dutch FastText embeddings |
| 'fr' | French | French FastText embeddings |
| 'it' | Italian | Italian FastText embeddings |
| 'es' | Spanish | Spanish FastText embeddings |
| 'pt' | Portuguese | Portuguese FastText embeddings |
| 'ro' | Romanian | Romanian FastText embeddings |
| 'ca' | Catalan | Catalan FastText embeddings |
| 'sv' | Swedish | Swedish FastText embeddings |
| 'da' | Danish | Danish FastText embeddings |
| 'no' | Norwegian | Norwegian FastText embeddings |
| 'fi' | Finnish | Finnish FastText embeddings |
| 'pl' | Polish | Polish FastText embeddings |
| 'cz' | Czech | Czech FastText embeddings |
| 'sk' | Slovak | Slovak FastText embeddings |
| 'sl' | Slovenian | Slovenian FastText embeddings |
| 'sr' | Serbian | Serbian FastText embeddings |
| 'hr' | Croatian | Croatian FastText embeddings |
| 'bg' | Bulgarian | Bulgarian FastText embeddings |
| 'ru' | Russian | Russian FastText embeddings |
| 'ar' | Arabic | Arabic FastText embeddings |
| 'he' | Hebrew | Hebrew FastText embeddings |
| 'tr' | Turkish | Turkish FastText embeddings |
| 'fa' | Persian | Persian FastText embeddings |
| 'ja' | Japanese | Japanese FastText embeddings |
| 'ko' | Korean | Korean FastText embeddings |
| 'zh' | Chinese | Chinese FastText embeddings |
| 'hi' | Hindi | Hindi FastText embeddings |
| 'id' | Indonesian | Indonesian FastText embeddings |
| 'eu' | Basque | Basque FastText embeddings |

So, if you want to load German FastText embeddings, instantiate as follows:

german_embedding = WordEmbeddings('de')

Alternatively, if you want to load German FastText embeddings trained over crawls, instantiate as follows:

german_embedding = WordEmbeddings('de-crawl')

We generally recommend the FastText embeddings, or GloVe if you want a smaller model.

If you want to use any other embeddings (not listed in the list above), you can load those by calling

custom_embedding = WordEmbeddings('path/to/your/custom/embeddings.gensim')

If you want to load custom embeddings, you need to make sure that they are stored in a format gensim can read.

You can, for example, convert FastText embeddings to gensim using the following code snippet:

import gensim

# load embeddings in word2vec text format and save them in gensim's native format
word_vectors = gensim.models.KeyedVectors.load_word2vec_format('/path/to/fasttext/embeddings.txt', binary=False)
word_vectors.save('/path/to/converted')

Note, however, that FastText embeddings can also produce vectors for out-of-vocabulary words by using sub-word information. If you want this functionality, use the FastTextEmbeddings class instead.