flair.embeddings.base#

class flair.embeddings.base.Embeddings#

Bases: Module, Generic[DT]

Abstract base class for all embeddings. Every new type of embedding must implement these methods.

embeddings_name: str#
__init__()#

Set some attributes that would otherwise result in errors. Overwrite these in your embedding class.

abstract property embedding_length: int#

Returns the length of the embedding vector.

abstract property embedding_type: str#
embed(data_points)#

Add embeddings to all words in a list of sentences.

If embeddings have already been added, they are updated only if they are non-static.

Return type:

List[TypeVar(DT, bound= DataPoint)]

abstract _add_embeddings_internal(sentences)#

Private method for adding embeddings to all words in a list of sentences.
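To make the abstract contract concrete, here is an illustrative sketch of a minimal custom embedding. The class name `HashEmbeddings` is hypothetical, and tokens are modelled as plain dicts with list vectors (rather than flair `Token` objects with torch tensors) so the sketch runs without flair or torch installed:

```python
# Illustrative sketch only: HashEmbeddings is a hypothetical class, and
# tokens are plain dicts rather than flair Token objects, so this runs
# without flair or torch installed.
class HashEmbeddings:
    """Toy word-level embedding that hashes each token to a fixed-size vector."""

    def __init__(self, length=4):
        self._length = length
        self.name = "hash"

    @property
    def embedding_length(self):
        # Length of the embedding vector, as required by the abstract property.
        return self._length

    @property
    def embedding_type(self):
        return "word-level"

    def _add_embeddings_internal(self, sentences):
        # Attach a vector to every token in every sentence.
        for sentence in sentences:
            for token in sentence:
                h = hash(token["text"])
                token["embedding"] = [((h >> (8 * i)) & 0xFF) / 255.0
                                      for i in range(self._length)]
        return sentences
```

A real subclass would derive from flair.embeddings.base.TokenEmbeddings and store torch tensors on flair Token objects; only the shape of the contract is shown here.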

get_names()#

Returns a list of embedding names.

In most cases, it is just a list with one item, namely the name of this embedding. But in some cases, the embedding is made up of several embeddings (as in StackedEmbeddings); then the list contains the names of all embeddings in the stack.

Return type:

List[str]
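The single-name versus stacked-names behaviour can be sketched with plain-Python stand-ins (the class names below are hypothetical, not flair's):

```python
class SingleEmbedding:
    # Stand-in for an ordinary embedding: get_names() returns just its own name.
    def __init__(self, name):
        self.name = name

    def get_names(self):
        return [self.name]


class Stack:
    # Stand-in for a stacked embedding: get_names() concatenates the
    # names of every embedding it contains, in order.
    def __init__(self, embeddings):
        self.embeddings = embeddings

    def get_names(self):
        return [n for e in self.embeddings for n in e.get_names()]
```

For example, `Stack([SingleEmbedding("glove"), SingleEmbedding("flair-forward")]).get_names()` returns `["glove", "flair-forward"]`.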

get_named_embeddings_dict()#
Return type:

Dict

static get_instance_parameters(locals)#
Return type:

dict

classmethod from_params(params)#
Return type:

Embeddings

to_params()#
Return type:

Dict[str, Any]

classmethod load_embedding(params)#
save_embeddings(use_state_dict=True)#
class flair.embeddings.base.ScalarMix(mixture_size, trainable=False)#

Bases: Module

Mixes several tensors by a learned weighting.

Computes a parameterised scalar mixture of N tensors. This method was proposed by Liu et al. (2019) in the paper: “Linguistic Knowledge and Transferability of Contextual Representations” (https://arxiv.org/abs/1903.08855)

The implementation is copied and slightly modified from the allennlp repository and is licensed under Apache 2.0. It can be found under: allenai/allennlp.

__init__(mixture_size, trainable=False)#

Inits scalar mix implementation.

mixture = gamma * sum(s_k * tensor_k) where s = softmax(w), with w and gamma scalar parameters.

Parameters:
  • mixture_size (int) – the number of tensors to mix (usually the number of layers)

  • trainable (bool) – whether or not the weights should be learnable.

forward(tensors)#

Forward pass of the scalar mix.

Computes a weighted average of the tensors. The input tensors can be any shape with at least two dimensions, but must all be the same shape.

Parameters:

tensors (List[Tensor]) – list of input tensors

Return type:

Tensor

Returns: computed weighted average of input tensors

training: bool#
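The mixture formula above can be reproduced in plain Python, using lists as stand-ins for torch tensors (`scalar_mix` below is an illustrative helper, not the class itself):

```python
import math

def scalar_mix(tensors, weights, gamma=1.0):
    # mixture = gamma * sum_k softmax(weights)_k * tensors[k]
    exp_w = [math.exp(w) for w in weights]
    total = sum(exp_w)
    s = [e / total for e in exp_w]  # softmax over the scalar parameters w
    length = len(tensors[0])
    return [gamma * sum(s_k * t[i] for s_k, t in zip(s, tensors))
            for i in range(length)]
```

With equal (zero) weights the softmax is uniform and the mixture reduces to a plain average: `scalar_mix([[1.0, 2.0], [3.0, 4.0]], [0.0, 0.0])` gives `[2.0, 3.0]`.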
class flair.embeddings.base.DocumentEmbeddings#

Bases: Embeddings[Sentence]

Abstract base class for all document-level embeddings. Every new type of document embedding must implement these methods.

property embedding_type: str#
embeddings_name: str#
name: str#
training: bool#
class flair.embeddings.base.TokenEmbeddings#

Bases: Embeddings[Sentence]

Abstract base class for all token-level embeddings. Every new type of word embedding must implement these methods.

property embedding_type: str#
embeddings_name: str#
name: str#
training: bool#
flair.embeddings.base.register_embeddings(*args)#
flair.embeddings.base.load_embeddings(params)#
Return type:

Embeddings