flair.models#
- class flair.models.SpanClassifier(embeddings, label_dictionary, pooling_operation='first_last', label_type='nel', candidates=None, **classifierargs)View on GitHub#
Bases: DefaultClassifier[Sentence, Span]
Entity Linking Model.
The model expects text/sentences with annotated entity mentions and predicts entities for these mentions. To this end, a word embedding is used to embed the sentences, and the embedding of each entity mention goes through a linear layer to produce the actual class label. The model can predict '<unk>' for entity mentions that it cannot confidently match to any of the known labels.
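A minimal usage sketch follows; the label inventory, mention annotation, and transformer name are illustrative assumptions, and the freshly initialized model is untrained, so this only demonstrates the API:

```python
from flair.data import Dictionary, Sentence
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SpanClassifier

# hypothetical label inventory; <unk> is included for unmatchable mentions
label_dict = Dictionary(add_unk=True)
label_dict.add_item("Larry_Page")
label_dict.add_item("Google")

linker = SpanClassifier(
    embeddings=TransformerWordEmbeddings("bert-base-uncased"),
    label_dictionary=label_dict,
    label_type="nel",
)

# the model expects pre-annotated mentions; here one span is marked by hand
sentence = Sentence("Larry Page founded Google .")
sentence[0:2].add_label("nel", "Larry_Page")
linker.predict(sentence)
print(sentence.get_labels("nel"))
```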
- __init__(embeddings, label_dictionary, pooling_operation='first_last', label_type='nel', candidates=None, **classifierargs)View on GitHub#
Initializes an EntityLinker.
- Parameters:
  - embeddings (TokenEmbeddings) – embeddings used to embed the tokens of the sentences.
  - label_dictionary (Dictionary) – dictionary that assigns ids to all classes. Should contain <unk>.
  - pooling_operation (str) – either 'average', 'first', 'last' or 'first_last'. Specifies how the text representation of an entity mention with more than one token is built. E.g. 'average' takes the average of the token embeddings in the mention, while 'first_last' concatenates the embedding of the first and the last token.
  - label_type (str) – name of the label you use.
  - candidates (Optional[CandidateGenerator]) – If provided, use a CandidateGenerator for prediction candidates.
  - **classifierargs – The arguments propagated to flair.nn.DefaultClassifier.__init__()
- emb_first(span, embedding_names)View on GitHub#
- emb_last(span, embedding_names)View on GitHub#
- emb_firstAndLast(span, embedding_names)View on GitHub#
- emb_mean(span, embedding_names)View on GitHub#
- property label_type#
Each model predicts labels of a certain type.
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.LanguageModel(dictionary, is_forward_lm, hidden_size, nlayers, embedding_size=100, nout=None, document_delimiter='\\n', dropout=0.1, recurrent_type='LSTM', has_decoder=True)View on GitHub#
Bases: Module
Container module with an encoder, a recurrent module, and a decoder.
- init_weights()View on GitHub#
- forward(input, hidden, ordered_sequence_lengths=None, decode=True)View on GitHub#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- get_representation(strings, start_marker, end_marker, chars_per_chunk=512)View on GitHub#
- get_output(text)View on GitHub#
Wraps hidden states in new Variables, to detach them from their history.
- static initialize(matrix)View on GitHub#
- classmethod load_language_model(model_file, has_decoder=True)View on GitHub#
- classmethod load_checkpoint(model_file)View on GitHub#
- save_checkpoint(file, optimizer, epoch, split, loss)View on GitHub#
- save(file)View on GitHub#
- generate_text(prefix='\\n', number_of_characters=1000, temperature=1.0, break_on_suffix=None)View on GitHub#
- Return type: Tuple[str, float]
- calculate_perplexity(text)View on GitHub#
- Return type: float
- training: bool#
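A short sketch of the generation and perplexity APIs, using the character LM underneath a pretrained FlairEmbeddings instance (embedding name assumed to be downloadable):

```python
from flair.embeddings import FlairEmbeddings

# the LanguageModel underlying a pretrained Flair embedding
lm = FlairEmbeddings("news-forward").lm

# sample 200 characters; returns the generated text and its log-likelihood
text, log_prob = lm.generate_text(prefix="The ", number_of_characters=200, temperature=0.8)
print(text)

# perplexity of an arbitrary string under the model
print(lm.calculate_perplexity("The company was founded in 2007."))
```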
- class flair.models.Lemmatizer(embeddings=None, label_type='lemma', rnn_input_size=50, rnn_hidden_size=256, rnn_layers=2, encode_characters=True, char_dict='common-chars-lemmatizer', max_sequence_length_dependent_on_input=True, max_sequence_length=20, use_attention=True, beam_size=1, start_symbol_for_encoding=True, end_symbol_for_encoding=True, bidirectional_encoding=True)View on GitHub#
Bases: Classifier[Sentence]
- __init__(embeddings=None, label_type='lemma', rnn_input_size=50, rnn_hidden_size=256, rnn_layers=2, encode_characters=True, char_dict='common-chars-lemmatizer', max_sequence_length_dependent_on_input=True, max_sequence_length=20, use_attention=True, beam_size=1, start_symbol_for_encoding=True, end_symbol_for_encoding=True, bidirectional_encoding=True)View on GitHub#
Initializes a Lemmatizer model.
The model consists of an encoder and a decoder. The encoder is either an RNN cell (torch.nn.GRU) or a token embedding from flair if an embedding is handed to the constructor (token_embedding). The output of the encoder is used as the initial hidden state of the decoder, an RNN cell (GRU) that predicts the lemma of the given token one letter at a time. Note that one can use data in which only the words that differ from their lemma are annotated, or data in which all words are annotated with a (possibly identical) lemma.
- Parameters:
  - encode_characters (bool) – If True, use a character embedding to additionally encode tokens per character.
  - start_symbol_for_encoding (bool) – If True, use a start symbol for encoding characters.
  - end_symbol_for_encoding (bool) – If True, use an end symbol for encoding characters.
  - bidirectional_encoding (bool) – If True, the character encoding is bidirectional.
  - embeddings (Optional[TokenEmbeddings]) – Embedding used to encode the sentence.
  - rnn_input_size (int) – Input size of the RNN(s). Each letter of a token is represented by a one-hot vector over the given character dictionary. This vector is transformed to an input_size vector with a linear layer.
  - rnn_hidden_size (int) – size of the hidden state of the RNN(s).
  - rnn_layers (int) – Number of stacked RNN cells.
  - beam_size (int) – Number of hypotheses used when decoding the output of the RNN. Only used in prediction.
  - char_dict (Union[str, Dictionary]) – Dictionary of characters the model is able to process. The dictionary must contain <unk> for the handling of unknown characters. If None, a standard dictionary will be loaded. One can either hand over a path to a dictionary or the dictionary itself.
  - label_type (str) – Name of the gold labels to use.
  - max_sequence_length_dependent_on_input (bool) – If True, the maximum length of a decoded sequence in prediction depends on the sentences you want to lemmatize. To be precise, the maximum length is computed as the length of the longest token in the sentences plus one.
  - max_sequence_length (int) – If max_sequence_length_dependent_on_input is False, this fixed maximum length is used for decoding all sentences.
  - use_attention (bool) – whether to use attention. Only sensible when encoding with an RNN.
- property label_type#
Each model predicts labels of a certain type.
- words_to_char_indices(tokens, end_symbol=True, start_symbol=False, padding_in_front=False, seq_length=None)View on GitHub#
For a given list of strings this function creates index vectors that represent the characters of the strings.
Each string is represented by sequence_length (maximum string length + entries for special symbol) many indices representing characters in self.char_dict. One can manually set the vector length with the parameter seq_length, though the vector length is always at least maximum string length in the list.
- Parameters:
  - tokens (List[str]) – the texts of the tokens to encode
  - seq_length – the maximum sequence length to use; if None the maximum is taken.
  - end_symbol – add self.end_index at the end of each representation
  - start_symbol – add self.start_index in front of each representation
  - padding_in_front – whether to fill up with self.dummy_index in front of or behind the strings
- forward_pass(sentences)View on GitHub#
- decode(decoder_input_indices, initial_hidden_states, all_encoder_outputs)View on GitHub#
- forward(encoder_input_indices, lengths, token_embedding_hidden)View on GitHub#
Define the computation performed at every call.
Should be overridden by all subclasses.
- Return type: Tuple[Tensor, Optional[Tensor]]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- encode(sentences)View on GitHub#
- encode_token(token)View on GitHub#
- forward_loss(sentences)View on GitHub#
Performs a forward pass and returns a loss tensor for backpropagation.
Implement this to enable training.
- Return type: Tuple[Tensor, int]
- predict(sentences, mini_batch_size=16, return_probabilities_for_all_classes=False, verbose=False, label_name='predicted', return_loss=False, embedding_storage_mode='none')View on GitHub#
Predict lemmas of words for a given (list of) sentence(s).
- Parameters:
  - sentences (Union[List[Sentence], Sentence]) – sentences to predict
  - label_name – label name used for predicted lemmas
  - mini_batch_size (int) – number of tokens that are sent through the RNN simultaneously, assuming batching_in_rnn is set to True
  - embedding_storage_mode – default is 'none', which is always best. Only set to 'cpu' or 'gpu' if you wish to not only predict, but also keep the generated embeddings in CPU or GPU memory respectively.
  - return_loss – whether to compute and return loss. Setting it to True only makes sense if labels are provided.
  - verbose (bool) – If True, lemmatized sentences will be printed in the console.
  - return_probabilities_for_all_classes (bool) – unused parameter.
- evaluate(*args, **kwargs)View on GitHub#
Evaluates the model. Returns a Result object containing evaluation results and a loss value.
Implement this to enable evaluation.
- Parameters:
  - data_points – The labeled data_points to evaluate.
  - gold_label_type – The label type indicating the gold labels
  - out_path – Optional output path to store predictions
  - embedding_storage_mode – One of 'none', 'cpu' or 'gpu'. 'none' means all embeddings are deleted and freshly recomputed, 'cpu' means all embeddings are stored on CPU, 'gpu' means all embeddings are stored on GPU
  - mini_batch_size – The batch_size to use for predictions
  - main_evaluation_metric – Specify which metric to highlight as main_score
  - exclude_labels – Specify classes that won't be considered in evaluation
  - gold_label_dictionary – Specify which classes should be considered; all other classes will be taken as <unk>.
  - return_loss – Whether to additionally compute the loss on the data points.
  - **kwargs – Arguments that will be ignored.
- Return type: Result
- Returns:
The evaluation results.
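A small prediction sketch; the model below is freshly initialized (untrained), purely to show the call signature, and predictions land under the default label_name 'predicted':

```python
from flair.data import Sentence
from flair.models import Lemmatizer

# untrained model with the default character dictionary, for API illustration only
lemmatizer = Lemmatizer(rnn_hidden_size=128, beam_size=1)

sentence = Sentence("The children were running home")
lemmatizer.predict(sentence)
for token in sentence:
    print(token.text, "->", token.get_label("predicted").value)
```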
- class flair.models.TextPairClassifier(embeddings, label_type, embed_separately=False, **classifierargs)View on GitHub#
Bases: DefaultClassifier[DataPair[Sentence, Sentence], DataPair[Sentence, Sentence]]
Text Pair Classification Model for tasks such as Recognizing Textual Entailment, built upon TextClassifier.
The model takes document embeddings and puts the resulting text representation(s) into a linear layer to get the actual class label. We provide two ways to embed the DataPairs: either by embedding both DataPoints and concatenating the resulting vectors ("embed_separately=True"), or by concatenating the DataPoints and embedding the resulting vector ("embed_separately=False").
- __init__(embeddings, label_type, embed_separately=False, **classifierargs)View on GitHub#
Initializes a TextPairClassifier.
- Parameters:
  - embeddings (DocumentEmbeddings) – embeddings used to embed each data point
  - label_type (str) – name of the label
  - embed_separately (bool) – if True, the two sentences are embedded separately and their embeddings concatenated; if False, both sentences are combined into one text and embedded together.
  - label_dictionary – dictionary of labels you want to predict
  - multi_label – auto-detected by default, but you can set this to True to force multi-label prediction or False to force single-label prediction
  - multi_label_threshold – If multi-label you can set the threshold to make predictions
  - loss_weights – Dictionary of weights for labels for the loss function. If any label's weight is unspecified it will default to 1.0
  - **classifierargs – The arguments propagated to flair.nn.DefaultClassifier.__init__()
- property label_type#
Each model predicts labels of a certain type.
- get_used_tokens(corpus)View on GitHub#
- Return type: Iterable[List[str]]
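A construction-and-prediction sketch; the label names, label type, and transformer name are assumptions (label_dictionary is one of the classifierargs propagated to DefaultClassifier):

```python
from flair.data import DataPair, Dictionary, Sentence
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextPairClassifier

# hypothetical entailment label inventory
label_dict = Dictionary(add_unk=False)
for label in ("entailment", "contradiction", "neutral"):
    label_dict.add_item(label)

model = TextPairClassifier(
    embeddings=TransformerDocumentEmbeddings("distilbert-base-uncased"),
    label_type="textual_entailment",
    label_dictionary=label_dict,  # propagated to DefaultClassifier.__init__()
)

pair = DataPair(Sentence("A man is sleeping."), Sentence("Nobody is awake."))
model.predict(pair)  # untrained here, so the predicted label is arbitrary
print(pair.get_label("textual_entailment"))
```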
- class flair.models.TextPairRegressor(embeddings, label_type, embed_separately=False, dropout=0.0, locked_dropout=0.0, word_dropout=0.0, decoder=None)View on GitHub#
Bases: Model[DataPair[Sentence, Sentence]], ReduceTransformerVocabMixin
Text Pair Regression Model for tasks such as the Semantic Textual Similarity Benchmark.
The model takes document embeddings and puts the resulting text representation(s) into a linear layer to get the score. We provide two ways to embed the DataPairs: either by embedding both DataPoints and concatenating the resulting vectors ("embed_separately=True"), or by concatenating the DataPoints and embedding the resulting vector ("embed_separately=False").
- __init__(embeddings, label_type, embed_separately=False, dropout=0.0, locked_dropout=0.0, word_dropout=0.0, decoder=None)View on GitHub#
Initialize the Text Pair Regression Model.
- Parameters:
  - embeddings (DocumentEmbeddings) – embeddings used to embed each data point
  - label_type (str) – name of the label
  - embed_separately (bool) – if True, the two sentences are embedded separately and their embeddings concatenated; if False, both sentences are combined into one text and embedded together.
  - dropout (float) – dropout
  - locked_dropout (float) – locked_dropout
  - word_dropout (float) – word_dropout
  - decoder (Optional[Module]) – if provided, that specific layer will be used as decoder; otherwise a linear layer with random parameters will be created.
- property label_type#
Each model predicts labels of a certain type.
- get_used_tokens(corpus)View on GitHub#
- Return type: Iterable[List[str]]
- forward_loss(pairs)View on GitHub#
Performs a forward pass and returns a loss tensor for backpropagation.
Implement this to enable training.
- Return type: Tuple[Tensor, int]
- predict(pairs, mini_batch_size=32, verbose=False, label_name=None, embedding_storage_mode='none')View on GitHub#
- evaluate(data_points, gold_label_type, out_path=None, embedding_storage_mode='none', mini_batch_size=32, main_evaluation_metric=('micro avg', 'f1-score'), exclude_labels=[], gold_label_dictionary=None, return_loss=True, **kwargs)View on GitHub#
Evaluates the model. Returns a Result object containing evaluation results and a loss value.
Implement this to enable evaluation.
- Parameters:
  - data_points (Union[List[DataPair[Sentence, Sentence]], Dataset]) – The labeled data_points to evaluate.
  - gold_label_type (str) – The label type indicating the gold labels
  - out_path (Union[str, Path, None]) – Optional output path to store predictions
  - embedding_storage_mode (str) – One of 'none', 'cpu' or 'gpu'. 'none' means all embeddings are deleted and freshly recomputed, 'cpu' means all embeddings are stored on CPU, 'gpu' means all embeddings are stored on GPU
  - mini_batch_size (int) – The batch_size to use for predictions
  - main_evaluation_metric (Tuple[str, str]) – Specify which metric to highlight as main_score
  - exclude_labels (List[str]) – Specify classes that won't be considered in evaluation
  - gold_label_dictionary (Optional[Dictionary]) – Specify which classes should be considered; all other classes will be taken as <unk>.
  - return_loss (bool) – Whether to additionally compute the loss on the data points.
  - **kwargs – Arguments that will be ignored.
- Return type: Result
- Returns:
The evaluation results.
- class flair.models.RelationClassifier(embeddings, label_dictionary, label_type, entity_label_types, entity_pair_labels=None, entity_threshold=None, cross_augmentation=True, encoding_strategy=<flair.models.relation_classifier_model.TypedEntityMarker object>, zero_tag_value='O', allow_unk_tag=True, **classifierargs)View on GitHub#
Bases: DefaultClassifier[EncodedSentence, EncodedSentence]
Relation Classifier to predict the relation between two entities.
Task#
Relation Classification (RC) is the task of identifying the semantic relation between two entities in a text. In contrast to (end-to-end) Relation Extraction (RE), RC requires pre-labelled entities.
Example:#
For the founded_by relation from ORG (head) to PER (tail) and the sentence "Larry Page and Sergey Brin founded Google .", we extract the relations founded_by(head='Google', tail='Larry Page') and founded_by(head='Google', tail='Sergey Brin').
Architecture#
The Relation Classifier Model builds upon a text classifier. The model generates an encoded sentence for each entity pair in the cross product of all entities in the original sentence. In the encoded representation, the entities in the current entity pair are masked/marked with control tokens. (For an example, see the docstrings of the different encoding strategies, e.g. TypedEntityMarker.) Then, for each encoded sentence, the model takes its document embedding and puts the resulting text representation(s) through a linear layer to get the class relation label.
The implemented encoding strategies are taken from this paper by Zhou et al.: https://arxiv.org/abs/2102.01373
Warning
Currently, the model has no multi-label support.
- __init__(embeddings, label_dictionary, label_type, entity_label_types, entity_pair_labels=None, entity_threshold=None, cross_augmentation=True, encoding_strategy=<flair.models.relation_classifier_model.TypedEntityMarker object>, zero_tag_value='O', allow_unk_tag=True, **classifierargs)View on GitHub#
Initializes a RelationClassifier.
- Parameters:
  - embeddings (DocumentEmbeddings) – The document embeddings used to embed each sentence
  - label_dictionary (Dictionary) – A Dictionary containing all predictable labels from the corpus
  - label_type (str) – The label type which is going to be predicted, in case a corpus has multiple annotations
  - entity_label_types (Union[str, Sequence[str], Dict[str, Optional[Set[str]]]]) – A label type or sequence of label types of the required relation entities. You can also specify a label filter in a dictionary with the label type as key and the valid entity labels as values in a set. E.g. to use only 'PER' and 'ORG' labels from a NER-tagger: {'ner': {'PER', 'ORG'}}. To use all labels from 'ner', pass 'ner'.
  - entity_pair_labels (Optional[Set[Tuple[str, str]]]) – A set of valid relation entity pair combinations, used as relation candidates. Specify valid entity pairs in a set of tuples of labels (<HEAD>, <TAIL>). E.g. for the born_in relation, only relations from 'PER' to 'LOC' make sense; relations from 'PER' to 'PER' are not meaningful, so it is advised to specify the entity_pair_labels as {('PER', 'LOC')}. This setting may help to reduce the number of relation candidates. Leaving this parameter as None (default) disables the relation-candidate filter, i.e. the model classifies the relation for each entity pair in the cross product of all entity pairs (inefficient).
  - entity_threshold (Optional[float]) – Only pre-labelled entities above this threshold are taken into account by the model.
  - cross_augmentation (bool) – If True, use cross augmentation to transform Sentences into EncodedSentences. When cross augmentation is enabled, the transformation functions, e.g. transform_corpus, generate an encoded sentence for each entity pair in the cross product of all entities in the original sentence. When disabling cross augmentation, the transform functions only generate encoded sentences for each gold relation annotation in the original sentence.
  - encoding_strategy (EncodingStrategy) – An instance of a class conforming to the EncodingStrategy protocol
  - zero_tag_value (str) – The label to use for out-of-class relations
  - allow_unk_tag (bool) – If False, removes <unk> from the passed label dictionary, otherwise do nothing.
  - classifierargs – The remaining parameters passed to the underlying flair.models.DefaultClassifier
- _valid_entities(sentence)View on GitHub#
Yields all valid entities, filtered under the specification of entity_label_types.
- Parameters:
  - sentence (Sentence) – A Sentence object with entity annotations
- Yields: Valid entities as _Entity
- Return type: Iterator[_Entity]
- _entity_pair_permutations(sentence)View on GitHub#
Yields all valid entity pair permutations (relation candidates).
If the passed sentence contains relation annotations, the relation gold label will be yielded along with the participating entities. The permutations are constructed by a filtered cross-product under the specification of entity_label_types and entity_pair_labels.
- Parameters:
  - sentence (Sentence) – A Sentence with entity annotations
- Yields: Tuples of (HEAD, TAIL, gold_label) – The head and tail _Entity objects have span references to the passed sentence.
- Return type: Iterator[Tuple[_Entity, _Entity, Optional[str]]]
- _encode_sentence(head, tail, gold_label=None)View on GitHub#
Returns a new Sentence object with masked/marked head and tail spans according to the encoding strategy.
If provided, the encoded sentence also has the corresponding gold label annotation from label_type.
- Parameters:
  - head (_Entity) – The head Entity
  - tail (_Entity) – The tail Entity
  - gold_label (Optional[str]) – An optional gold label of the relation induced by the head and tail entity
- Return type: EncodedSentence
Returns: The EncodedSentence with gold annotations
- _encode_sentence_for_inference(sentence)View on GitHub#
Create Encoded Sentences and Relation pairs for Inference.
Yields encoded sentences annotated with their gold relation and the corresponding relation object in the original sentence for all valid entity pair permutations. The created encoded sentences are newly created sentences with no reference to the passed sentence.
- Important properties:
Every sentence has exactly one encoded head and tail entity token. Therefore, every encoded sentence has exactly one induced relation annotation, the gold annotation or self.zero_tag_value.
The created relations have head and tail spans from the original passed sentence.
- Parameters:
  - sentence (Sentence) – A flair Sentence object with entity annotations
- Return type: Iterator[Tuple[EncodedSentence, Relation]]
Returns: Encoded sentences annotated with their gold relation and the corresponding relation in the original sentence
- _encode_sentence_for_training(sentence)View on GitHub#
Create Encoded Sentences and Relation pairs for Training.
Same as self._encode_sentence_for_inference, but with the option of disabling cross augmentation via self.cross_augmentation (and the relation with reference to the original sentence is not returned).
- Return type: Iterator[EncodedSentence]
- transform_sentence(sentences)View on GitHub#
Transforms sentences into encoded sentences specific to the RelationClassifier.
For more information on the internal sentence transformation procedure, see the flair.models.RelationClassifier architecture and the docstrings of the different flair.models.relation_classifier_model.EncodingStrategy variants.
- transform_dataset(dataset)View on GitHub#
Transforms a dataset into a dataset containing encoded sentences specific to the RelationClassifier.
The returned dataset is stored in memory. For more information on the internal sentence transformation procedure, see the RelationClassifier architecture and the docstrings of the different EncodingStrategy variants.
- Parameters:
  - dataset (Dataset[Sentence]) – A dataset of sentences to transform
- Return type: FlairDatapointDataset[EncodedSentence]
Returns: A dataset of encoded sentences specific to the RelationClassifier
- transform_corpus(corpus)View on GitHub#
Transforms a corpus into a corpus containing encoded sentences specific to the RelationClassifier.
The splits of the returned corpus are stored in memory. For more information on the internal sentence transformation procedure, see the RelationClassifier architecture and the docstrings of the different EncodingStrategy variants.
- Parameters:
  - corpus (Corpus[Sentence]) – A corpus of sentences to transform
- Return type: Corpus[EncodedSentence]
Returns: A corpus of encoded sentences specific to the RelationClassifier
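A sketch of the typical workflow: construct the classifier, then transform the corpus so that training runs on encoded sentences rather than the original ones. The corpus and transformer name are illustrative assumptions:

```python
from flair.datasets import RE_ENGLISH_CONLL04
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import RelationClassifier

corpus = RE_ENGLISH_CONLL04()  # a corpus carrying 'ner' and 'relation' annotations
label_dict = corpus.make_label_dictionary("relation")

model = RelationClassifier(
    embeddings=TransformerDocumentEmbeddings("distilbert-base-uncased"),
    label_dictionary=label_dict,
    label_type="relation",
    entity_label_types="ner",
)

# train on the encoded corpus rather than the original one
encoded_corpus = model.transform_corpus(corpus)
```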
- _get_data_points_from_sentence(sentence)View on GitHub#
Returns the encoded sentences to which labels are added.
To encode sentences, use the transform function of the RelationClassifier.
- Return type: List[EncodedSentence]
- predict(sentences, mini_batch_size=32, return_probabilities_for_all_classes=False, verbose=False, label_name=None, return_loss=False, embedding_storage_mode='none')View on GitHub#
Predicts the class labels for the given sentence(s).
Standard Sentence objects and EncodedSentences specific to the RelationClassifier are allowed as input. The (relation) labels are directly added to the sentences.
- Parameters:
  - sentences (Union[List[Sentence], List[EncodedSentence], Sentence, EncodedSentence]) – A list of (encoded) sentences.
  - mini_batch_size (int) – The mini batch size to use
  - return_probabilities_for_all_classes (bool) – Return probabilities for all classes instead of only the best predicted one
  - verbose (bool) – Set to display a progress bar
  - return_loss (bool) – Set to return loss
  - label_name (Optional[str]) – Set to change the predicted label type name
  - embedding_storage_mode (str) – The default is 'none', which is always best. Only set to 'cpu' or 'gpu' if you wish to predict and keep the generated embeddings in CPU or GPU memory, respectively.
- Return type: Optional[Tuple[Tensor, int]]
Returns: The loss and the total number of classes, if return_loss is set
- property label_type: str#
Each model predicts labels of a certain type.
- property zero_tag_value: str#
- property allow_unk_tag: bool#
- get_used_tokens(corpus)View on GitHub#
- Return type: Iterable[List[str]]
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.RelationExtractor(embeddings, label_type, entity_label_type, entity_pair_filters=None, pooling_operation='first_last', train_on_gold_pairs_only=False, **classifierargs)View on GitHub#
Bases: DefaultClassifier[Sentence, Relation]
- __init__(embeddings, label_type, entity_label_type, entity_pair_filters=None, pooling_operation='first_last', train_on_gold_pairs_only=False, **classifierargs)View on GitHub#
Initializes a RelationExtractor.
- Parameters:
  - embeddings (TokenEmbeddings) – embeddings used to embed each data point
  - label_type (str) – name of the label
  - entity_label_type (str) – name of the labels used to represent entities
  - entity_pair_filters (Optional[List[Tuple[str, str]]]) – if provided, only classify pairs that match the filter
  - pooling_operation (str) – either "first" or "first_last", specifying how the embeddings of the entities should be used to create relation embeddings
  - train_on_gold_pairs_only (bool) – if True, relations with the "O" (no relation) label will be ignored in training.
  - **classifierargs – The arguments propagated to flair.nn.DefaultClassifier.__init__()
- property label_type#
Each model predicts labels of a certain type.
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.RegexpTagger(mapping)View on GitHub#
Bases: object
- __init__(mapping)View on GitHub#
This tagger is capable of tagging sentence objects with given regexp -> label mappings.
E.g.: the tuple (r'(["\'])(?:(?=(\\?))\2.)*?\1', 'QUOTE') maps every match of the regexp to a <QUOTE> labeled span and therefore labels the given sentence object when RegexpTagger.predict() is called. This tagger supports multilabeling, so tokens can be included in multiple labeled spans. The regexps are compiled internally and an re.error will be raised if the compilation of a given regexp fails.
If a match violates (in this case overlaps) a token span, an exception is raised.
- Parameters:
  - mapping (Union[List[Tuple[str, str]], Tuple[str, str]]) – A list of tuples or a single tuple representing a mapping as regexp -> label
- property registered_labels#
- register_labels(mapping)View on GitHub#
Register a regexp -> label mapping.
- Parameters:
  - mapping (Union[List[Tuple[str, str]], Tuple[str, str]]) – A list of tuples or a single tuple representing a mapping as regexp -> label
- remove_labels(labels)View on GitHub#
Remove a registered regexp -> label mapping given by label.
- Parameters:
  - labels (Union[List[str], str]) – A list of labels or a single label as strings.
- predict(sentences)View on GitHub#
Predict the given sentences according to the registered mappings.
- Return type: List[Sentence]
- _label(sentence)View on GitHub#
This will add a complex_label to the given sentence for every match.span() of every registered mapping.
If a match span overlaps with a token span, an exception is raised.
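A small sketch (the pattern and label are illustrative):

```python
from flair.data import Sentence
from flair.models import RegexpTagger

# map every run of digits to a <NUMBER> labeled span
tagger = RegexpTagger(mapping=(r"\d+", "NUMBER"))

sentence = Sentence("In 2019 we sold 100 units")
tagger.predict(sentence)
for span in sentence.get_spans():
    print(span.text, span.labels)
```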
- class flair.models.SequenceTagger(embeddings, tag_dictionary, tag_type, use_rnn=True, rnn=None, rnn_type='LSTM', tag_format='BIOES', hidden_size=256, rnn_layers=1, bidirectional=True, use_crf=True, reproject_embeddings=True, dropout=0.0, word_dropout=0.05, locked_dropout=0.5, train_initial_hidden_state=False, loss_weights=None, init_from_state_dict=False, allow_unk_predictions=False)View on GitHub#
Bases: Classifier[Sentence]
- __init__(embeddings, tag_dictionary, tag_type, use_rnn=True, rnn=None, rnn_type='LSTM', tag_format='BIOES', hidden_size=256, rnn_layers=1, bidirectional=True, use_crf=True, reproject_embeddings=True, dropout=0.0, word_dropout=0.05, locked_dropout=0.5, train_initial_hidden_state=False, loss_weights=None, init_from_state_dict=False, allow_unk_predictions=False)View on GitHub#
Sequence Tagger class for predicting labels for single tokens. Can be parameterized by several attributes.
In case of multitask learning, pass shared embeddings or a shared RNN into the respective attributes.
- Parameters:
  - embeddings (TokenEmbeddings) – Embeddings to use during training and prediction
  - tag_dictionary (Dictionary) – Dictionary containing all tags from the corpus which can be predicted
  - tag_type (str) – type of tag which is going to be predicted, in case a corpus has multiple annotations
  - use_rnn (bool) – If True, use an RNN, else a linear layer.
  - rnn (Optional[RNN]) – Takes a torch.nn.Module as parameter by which you can pass a shared RNN between different tasks.
  - rnn_type (str) – Specifies the RNN type to use; default is 'LSTM', can choose between 'GRU' and 'RNN' as well.
  - hidden_size (int) – Hidden size of the RNN layer
  - rnn_layers (int) – number of RNN layers
  - bidirectional (bool) – If True, the RNN becomes bidirectional
  - use_crf (bool) – If True, use a Conditional Random Field for prediction, else a linear map to tag space.
  - reproject_embeddings (bool) – If True, add a linear layer on top of the embeddings, if you want to imitate fine-tuning of non-trainable embeddings.
  - dropout (float) – If > 0, then use dropout.
  - word_dropout (float) – If > 0, then use word dropout.
  - locked_dropout (float) – If > 0, then use locked dropout.
  - train_initial_hidden_state (bool) – if True, trains the initial hidden state of the RNN
  - loss_weights (Optional[Dict[str, float]]) – Dictionary of weights for labels for the loss function. If any label's weight is unspecified it will default to 1.0.
  - init_from_state_dict (bool) – Indicator whether we are loading a model from a state dict, since we need to transform previous models' weights into CRF instance weights
  - allow_unk_predictions (bool) – If True, allows spans to predict <unk> too.
  - tag_format (str) – the format to encode spans as tags, either "BIO" or "BIOES"
- property label_type#
Each model predicts labels of a certain type.
- _init_loss_weights(loss_weights)View on GitHub#
Initializes the loss weights based on the given dictionary.
- Parameters:
  - loss_weights (Dict[str, float]) – dictionary containing the loss weights
- Return type: Tensor
- _init_initial_hidden_state(num_directions)View on GitHub#
Initializes hidden states given the number of directions in the RNN.
- Parameters:
  - num_directions (int) – Number of directions in the RNN.
- static RNN(rnn_type, rnn_layers, hidden_size, bidirectional, rnn_input_dim)View on GitHub#
Static wrapper function returning an RNN instance from PyTorch.
- Parameters:
  - rnn_type (str) – Type of RNN from torch.nn
  - rnn_layers (int) – number of layers to include
  - hidden_size (int) – hidden size of the RNN cell
  - bidirectional (bool) – If True, the RNN cell is bidirectional
  - rnn_input_dim (int) – Input dimension to the RNN cell
- Return type: RNN
- forward_loss(sentences)View on GitHub#
Performs a forward pass and returns a loss tensor for backpropagation.
Implement this to enable training.
- Return type: Tuple[Tensor, int]
- forward(sentence_tensor, lengths)View on GitHub#
Forward propagation through network.
- Parameters:
  - sentence_tensor (Tensor) – A tensor representing the batch of sentences.
  - lengths (LongTensor) – A LongTensor representing the lengths of the respective sentences.
- static _get_scores_from_features(features, lengths)View on GitHub#
Remove paddings to get a smaller tensor.
Trims current batch tensor in shape (batch size, sequence length, tagset size) in such a way that all pads are going to be removed.
- Parameters:
  - features (Tensor) – all features from forward propagation
  - lengths (Tensor) – lengths of each sentence in the batch, used to trim padding tokens
- _get_gold_labels(sentences)View on GitHub#
Extracts gold labels from each sentence.
- Parameters:
  - sentences (List[Sentence]) – List of sentences in batch
- Return type: List[str]
- predict(sentences, mini_batch_size=32, return_probabilities_for_all_classes=False, verbose=False, label_name=None, return_loss=False, embedding_storage_mode='none', force_token_predictions=False)View on GitHub#
Predicts labels for current batch with CRF or Softmax.
- Parameters:
  - sentences (Union[List[Sentence], Sentence]) – List of sentences in batch
  - mini_batch_size (int) – batch size for test data
  - return_probabilities_for_all_classes (bool) – Whether to return probabilities for all classes
  - verbose (bool) – whether to use a progress bar
  - label_name (Optional[str]) – which label to predict
  - return_loss – whether to return loss value
  - embedding_storage_mode – determines where to store embeddings - can be "gpu", "cpu" or None.
  - force_token_predictions (bool) – add labels per token instead of span labels, even if self.predict_spans is True
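A typical prediction sketch with a pretrained NER model from the model hub:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english")
sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity.text, entity.get_label("ner").value)
```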
- _standard_inference(features, batch, probabilities_for_all_classes)View on GitHub#
Softmax over emission scores from forward propagation.
- Parameters:
  - features (Tensor) – sentence tensor from forward propagation
  - batch (List[Sentence]) – list of sentences
  - probabilities_for_all_classes (bool) – whether to return the score for each tag in the tag dictionary
- _all_scores_for_token(sentences, scores, lengths)View on GitHub#
Returns all scores for each tag in tag dictionary.
- _get_state_dict()View on GitHub#
Returns the state dictionary for this model.
- push_to_hub(repo_id, token=None, private=None, commit_message='Add new SequenceTagger model.')View on GitHub#
Uploads the Sequence Tagger model to a Hugging Face Hub repository.
- Parameters:
  - repo_id (str) – A namespace (user or an organization) and a repo name separated by a /.
  - token (Optional[str]) – An authentication token (see https://huggingface.co/settings/token).
  - private (Optional[bool]) – Whether the repository is private.
  - commit_message (str) – Message to commit while pushing.
Returns: The url of the repository.
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.TokenClassifier(embeddings, label_dictionary, label_type, span_encoding='BIOES', **classifierargs)View on GitHub#
Bases: DefaultClassifier[Sentence, Token]
This is a simple class of models that tags individual words in text.
- __init__(embeddings, label_dictionary, label_type, span_encoding='BIOES', **classifierargs)View on GitHub#
Initializes a TokenClassifier.
- Parameters:
  - embeddings (TokenEmbeddings) – word embeddings used in the tagger
  - label_dictionary (Dictionary) – dictionary of labels or BIO/BIOES tags you want to predict
  - label_type (str) – string identifier for the tag type
  - span_encoding (str) – the format to encode spans as tags, either "BIO" or "BIOES"
  - **classifierargs – The arguments propagated to flair.nn.DefaultClassifier.__init__()
- property label_type#
Each model predicts labels of a certain type.
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.WordTagger(embeddings, label_dictionary, label_type, span_encoding='BIOES', **classifierargs)View on GitHub#
Bases: TokenClassifier
Deprecated since version 0.12.2: The WordTagger was renamed to flair.models.TokenClassifier.
- class flair.models.FewshotClassifierView on GitHub#
Bases: Classifier[Sentence], ABC
- forward_loss(data_points)View on GitHub#
Performs a forward pass and returns a loss tensor for backpropagation.
Implement this to enable training.
- Return type: Tuple[Tensor, int]
- property tars_embeddings#
- train(mode=True)View on GitHub#
Populate label similarity map based on cosine similarity before running epoch.
If num_negative_labels_to_sample is set to an integer value, then before starting each epoch the model creates a similarity measure between the label names, based on cosine distances between their BERT-encoded embeddings.
- _compute_label_similarity_for_current_epoch()View on GitHub#
Compute the similarity between all labels for better sampling of negatives.
- get_current_label_dictionary()View on GitHub#
- get_current_label_type()View on GitHub#
- is_current_task_multi_label()View on GitHub#
- add_and_switch_to_new_task(task_name, label_dictionary, label_type, multi_label=True, force_switch=False)View on GitHub#
Adds a new task to an existing TARS model.
Sets necessary attributes and finally ‘switches’ to the new task. Parameters are similar to the constructor except for model choice, batch size and negative sampling. This method does not store the resultant model onto disk.
- Parameters:
  - task_name (str) – a string depicting the name of the task
  - label_dictionary (Union[List, Set, Dictionary, str]) – dictionary of the labels you want to predict
  - label_type (str) – string to identify the label type ('ner', 'sentiment', etc.)
  - multi_label (bool) – whether this task is a multi-label prediction problem
  - force_switch (bool) – if True, will overwrite an existing task with the same name
- list_existing_tasks()View on GitHub#
Lists existing tasks in the loaded TARS model on the console.
- Return type: Set[str]
- switch_to_task(task_name)View on GitHub#
Switches to a task which was previously added.
- property label_type#
Each model predicts labels of a certain type.
- predict_zero_shot(sentences, candidate_label_set, multi_label=True)View on GitHub#
Make zero shot predictions from the TARS model.
- get_used_tokens(corpus)View on GitHub#
- Return type: Iterable[List[str]]
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.TARSClassifier(task_name=None, label_dictionary=None, label_type=None, embeddings='bert-base-uncased', num_negative_labels_to_sample=2, prefix=True, **tagger_args)View on GitHub#
Bases: FewshotClassifier
TARS model for text classification.
In the backend, the model uses a BERT-based binary text classifier which, given a <label, text> pair, predicts the probability of the two classes "True" and "False". The input data is a usual Sentence object which is inflated by the model internally before pushing it through the transformer stack of BERT.
- static_label_type = 'tars_label'#
- LABEL_MATCH = 'YES'#
- LABEL_NO_MATCH = 'NO'#
- __init__(task_name=None, label_dictionary=None, label_type=None, embeddings='bert-base-uncased', num_negative_labels_to_sample=2, prefix=True, **tagger_args)View on GitHub#
Initializes a TarsClassifier.
- Parameters:
  - task_name (Optional[str]) – a string depicting the name of the task.
  - label_dictionary (Optional[Dictionary]) – dictionary of labels you want to predict.
  - label_type (Optional[str]) – name of the label
  - embeddings (Union[TransformerDocumentEmbeddings, str]) – name of the pre-trained transformer model, e.g. 'bert-base-uncased'.
  - num_negative_labels_to_sample (Optional[int]) – number of negative labels to sample for each positive label against a sentence during training. Defaults to 2 negative labels for each positive label. The model will sample all negative labels if None is passed; that slows down training considerably.
  - multi_label – auto-detected by default, but you can set this to True to force multi-label predictions or False to force single-label predictions.
  - multi_label_threshold – If multi-label you can set the threshold to make predictions.
  - beta – Parameter for F-beta score for evaluation and training annealing.
  - prefix (bool) – if True, the label will be concatenated at the start, else at the end.
  - **tagger_args – The arguments propagated to FewshotClassifier.__init__()
- property tars_embeddings#
- predict(sentences, mini_batch_size=32, return_probabilities_for_all_classes=False, verbose=False, label_name=None, return_loss=False, embedding_storage_mode='none', label_threshold=0.5, multi_label=None, force_label=False)View on GitHub#
Predict sentences on the Text Classification task.
- Parameters:
  - sentences (Union[List[Sentence], Sentence]) – a Sentence or a List of Sentence
  - return_probabilities_for_all_classes (bool) – if True, all classes will be added with their respective confidences.
  - force_label (bool) – when multi-label is active, you can force the model to always return at least one prediction.
  - multi_label (Optional[bool]) – if True, multiple labels can be predicted. Defaults to the setting of the configured task.
  - label_threshold (float) – when multi_label, specify the threshold at which a class counts as predicted.
  - mini_batch_size – size of the minibatch; usually bigger is faster but consumes more memory, up to a point where it has no more effect.
  - all_tag_prob – True to compute the score for each tag on each token, otherwise only the score of the best tag is returned
  - verbose (bool) – set to True to display a progress bar
  - return_loss – set to True to also compute the loss
  - label_name (Optional[str]) – set this to change the name of the label type that is predicted
  - embedding_storage_mode – default is 'none', which doesn't store the embeddings in RAM. Only set to 'cpu' or 'gpu' if you wish to not only predict, but also keep the generated embeddings in CPU or GPU memory respectively.
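A zero-shot sketch with the pretrained 'tars-base' model; the candidate labels are arbitrary:

```python
from flair.data import Sentence
from flair.models import TARSClassifier

tars = TARSClassifier.load("tars-base")
sentence = Sentence("I am so glad you liked it!")
tars.predict_zero_shot(sentence, ["happy", "sad"])
print(sentence.labels)
```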
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.TARSTagger(task_name=None, label_dictionary=None, label_type=None, embeddings='bert-base-uncased', num_negative_labels_to_sample=2, prefix=True, **tagger_args)View on GitHub#
Bases: FewshotClassifier
TARS model for sequence tagging.
In the backend, the model uses a BERT-based 5-class sequence labeler which, given a <label, text> pair, predicts the probability for each word to belong to one of the BIOES classes. The input data is a usual Sentence object which is inflated by the model internally before pushing it through the transformer stack of BERT.
- static_label_type = 'tars_label'#
- __init__(task_name=None, label_dictionary=None, label_type=None, embeddings='bert-base-uncased', num_negative_labels_to_sample=2, prefix=True, **tagger_args)View on GitHub#
Initializes a TarsTagger.
- Parameters:
  - task_name (Optional[str]) – a string depicting the name of the task
  - label_dictionary (Optional[Dictionary]) – dictionary of labels you want to predict
  - label_type (Optional[str]) – name of the label
  - embeddings (Union[TransformerWordEmbeddings, str]) – name of the pre-trained transformer model, e.g. 'bert-base-uncased'
  - num_negative_labels_to_sample (Optional[int]) – number of negative labels to sample for each positive label against a sentence during training. Defaults to 2 negative labels for each positive label. The model will sample all negative labels if None is passed; that slows down training considerably.
  - prefix (bool) – if True, the label will be concatenated at the start, else at the end.
  - **tagger_args – The arguments propagated to FewshotClassifier.__init__()
- property tars_embeddings#
- predict(sentences, mini_batch_size=32, return_probabilities_for_all_classes=False, verbose=False, label_name=None, return_loss=False, embedding_storage_mode='none', most_probable_first=True)View on GitHub#
Predict sequence tags for Named Entity Recognition task.
- Parameters:
  - sentences (Union[List[Sentence], Sentence]) – a Sentence or a List of Sentence
  - mini_batch_size – size of the minibatch; usually bigger is faster but consumes more memory, up to a point where it has no more effect.
  - all_tag_prob – True to compute the score for each tag on each token, otherwise only the score of the best tag is returned
  - verbose (bool) – set to True to display a progress bar
  - return_loss – set to True to also compute the loss
  - label_name (Optional[str]) – set this to change the name of the label type that is predicted
  - embedding_storage_mode – default is 'none', which doesn't store the embeddings in RAM. Only set to 'cpu' or 'gpu' if you wish to not only predict, but also keep the generated embeddings in CPU or GPU memory respectively.
  - return_probabilities_for_all_classes (bool) – if True, all classes will be added with their respective confidences.
  - most_probable_first (bool) – if True, nested predictions will be removed; if False all predictions will be returned, including overlaps
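A zero-shot tagging sketch with the pretrained 'tars-ner' model; the task name and label names are arbitrary:

```python
from flair.data import Sentence
from flair.models import TARSTagger

tars = TARSTagger.load("tars-ner")
tars.add_and_switch_to_new_task(
    "zero-shot-drinks", ["beverage", "vendor"], label_type="ner"
)

sentence = Sentence("I ordered a latte at Starbucks.")
tars.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span.text, span.get_label("ner").value)
```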
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- class flair.models.TextClassifier(embeddings, label_type, **classifierargs)View on GitHub#
Bases: DefaultClassifier[Sentence, Sentence]
Text Classification Model.
The model takes word embeddings, puts them into an RNN to obtain a text representation, and puts the text representation in the end into a linear layer to get the actual class label. The model can handle single- and multi-class data sets.
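A typical usage sketch with a pretrained sentiment model (identifier as published on the flair model hub):

```python
from flair.data import Sentence
from flair.models import TextClassifier

classifier = TextClassifier.load("en-sentiment")
sentence = Sentence("Flair is pretty neat!")
classifier.predict(sentence)
print(sentence.labels)
```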
- __init__(embeddings, label_type, **classifierargs)View on GitHub#
Initializes a TextClassifier.
- Parameters:
  - embeddings (DocumentEmbeddings) – embeddings used to embed each data point
  - label_dictionary – dictionary of labels you want to predict
  - label_type (str) – string identifier for the tag type
  - multi_label – auto-detected by default, but you can set this to True to force multi-label predictions or False to force single-label predictions.
  - multi_label_threshold – If multi-label you can set the threshold to make predictions
  - beta – Parameter for F-beta score for evaluation and training annealing
  - loss_weights – Dictionary of weights for labels for the loss function. If any label's weight is unspecified it will default to 1.0
  - **classifierargs – The arguments propagated to flair.nn.DefaultClassifier.__init__()
- property label_type#
Each model predicts labels of a certain type.
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded text classifier model
- class flair.models.TextRegressor(document_embeddings, label_name='label')View on GitHub#
Bases: Model[Sentence], ReduceTransformerVocabMixin
- property label_type#
Each model predicts labels of a certain type.
- forward(*args)View on GitHub#
Define the computation performed at every call.
Should be overridden by all subclasses.
- Return type: Tensor
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- forward_loss(sentences)View on GitHub#
Performs a forward pass and returns a loss tensor for backpropagation.
Implement this to enable training.
- Return type: Tuple[Tensor, int]
- predict(sentences, mini_batch_size=32, verbose=False, label_name=None, embedding_storage_mode='none')View on GitHub#
- Return type: List[Sentence]
- forward_labels_and_loss(sentences)View on GitHub#
- Return type: Tuple[Tensor, Tensor]
- evaluate(data_points, gold_label_type, out_path=None, embedding_storage_mode='none', mini_batch_size=32, main_evaluation_metric=('micro avg', 'f1-score'), exclude_labels=[], gold_label_dictionary=None, return_loss=True, **kwargs)View on GitHub#
Evaluates the model. Returns a Result object containing evaluation results and a loss value.
Implement this to enable evaluation.
- Parameters:
  - data_points (Union[List[Sentence], Dataset]) – The labeled data_points to evaluate.
  - gold_label_type (str) – The label type indicating the gold labels
  - out_path (Union[str, Path, None]) – Optional output path to store predictions
  - embedding_storage_mode (str) – One of 'none', 'cpu' or 'gpu'. 'none' means all embeddings are deleted and freshly recomputed, 'cpu' means all embeddings are stored on CPU, 'gpu' means all embeddings are stored on GPU
  - mini_batch_size (int) – The batch_size to use for predictions
  - main_evaluation_metric (Tuple[str, str]) – Specify which metric to highlight as main_score
  - exclude_labels (List[str]) – Specify classes that won't be considered in evaluation
  - gold_label_dictionary (Optional[Dictionary]) – Specify which classes should be considered; all other classes will be taken as <unk>.
  - return_loss (bool) – Whether to additionally compute the loss on the data points.
  - **kwargs – Arguments that will be ignored.
- Return type: Result
- Returns:
The evaluation results.
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model
- get_used_tokens(corpus)View on GitHub#
- Return type: Iterable[List[str]]
- class flair.models.ClusteringModel(model, embeddings)View on GitHub#
Bases: object
A wrapper class to apply sklearn clustering models on DocumentEmbeddings.
- __init__(model, embeddings)View on GitHub#
Instantiate the ClusteringModel.
- Parameters:
  - model (Union[ClusterMixin, BaseEstimator]) – the clustering algorithm from sklearn this wrapper will use.
  - embeddings (DocumentEmbeddings) – the flair DocumentEmbeddings this wrapper uses to calculate a vector for each sentence.
- fit(corpus, **kwargs)View on GitHub#
Trains the model.
- Parameters:
  - corpus (Corpus) – the flair corpus this wrapper will use for fitting the model.
  - **kwargs – parameters propagated to the model's .fit() method.
- predict(corpus)View on GitHub#
Predict labels given a list of sentences and returns the respective class indices.
- Parameters:
  - corpus (Corpus) – the flair corpus this wrapper will use for predicting the labels.
- save(model_file)View on GitHub#
Saves current model.
- Parameters:
  - model_file (Union[str, Path]) – path where to save the model.
- static load(model_file)View on GitHub#
Loads a model from a given path.
- Parameters:
  - model_file (Union[str, Path]) – path to the file where the model is saved.
- _convert_dataset(corpus, label_type=None, batch_size=32, return_label_dict=False)View on GitHub#
Makes a flair-corpus sklearn compatible.
Turns the corpora into X, y datasets as required for most sklearn clustering models. Ref.: https://scikit-learn.org/stable/modules/classes.html#module-sklearn.cluster
- evaluate(corpus, label_type)View on GitHub#
This method calculates some evaluation metrics for the clustering.
Also, the result of the evaluation is logged.
- Parameters:
  - corpus (Corpus) – the flair corpus this wrapper will use for evaluation.
  - label_type (str) – the label type from the sentences that will be used for the evaluation.
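A fit-and-evaluate sketch; the corpus, its label type, and the transformer name are assumptions:

```python
from sklearn.cluster import KMeans

from flair.datasets import TREC_6
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import ClusteringModel

embeddings = TransformerDocumentEmbeddings("distilbert-base-uncased")
model = ClusteringModel(model=KMeans(n_clusters=6), embeddings=embeddings)

corpus = TREC_6()
model.fit(corpus)
model.evaluate(corpus, label_type="question_class")
```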
- class flair.models.MultitaskModel(models, task_ids=None, loss_factors=None, use_all_tasks=False)View on GitHub#
Bases: Classifier
Multitask Model class which acts as wrapper for creating custom multitask models.
Takes different tasks as input; parameter sharing is done by objects in flair, i.e. creating an embedding layer and passing it to two different models results in a hard parameter-shared embedding layer. The abstract class takes care of calling the correct forward propagation and loss function of the respective model.
- __init__(models, task_ids=None, loss_factors=None, use_all_tasks=False)View on GitHub#
Instantiates the MultiTaskModel.
- Parameters:
  - models (List[Classifier]) – The child models used during multitask training.
  - task_ids (Optional[List[str]]) – If given, assigns each corresponding model the specified task id. Otherwise, tasks get the ids 'Task_0', 'Task_1', …
  - loss_factors (Optional[List[float]]) – If given, weight the losses of the corresponding models during training.
  - use_all_tasks (bool) – If True, each sentence will be trained on all tasks in parallel; otherwise one task per epoch will be sampled to train the sentence on.
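A construction sketch of hard parameter sharing: one embedding object passed to two classifiers (label types, label names, and the transformer name are assumptions):

```python
from flair.data import Dictionary
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import MultitaskModel, TextClassifier

# a single embedding instance shared by both models -> hard parameter sharing
shared_embeddings = TransformerDocumentEmbeddings("distilbert-base-uncased")

def make_dict(labels):
    d = Dictionary(add_unk=False)
    for label in labels:
        d.add_item(label)
    return d

sentiment = TextClassifier(shared_embeddings, label_type="sentiment",
                           label_dictionary=make_dict(["POSITIVE", "NEGATIVE"]))
topic = TextClassifier(shared_embeddings, label_type="topic",
                       label_dictionary=make_dict(["sports", "politics"]))

multitask = MultitaskModel([sentiment, topic], loss_factors=[1.0, 0.5])
```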
- forward(*args)View on GitHub#
Define the computation performed at every call.
Should be overridden by all subclasses.
- Return type: Tensor
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- forward_loss(sentences)View on GitHub#
Calls the respective forward loss of each model and sums them weighted by their loss factors.
- Parameters:
  - sentences (Union[List[Sentence], Sentence]) – batch of sentences
- Return type: Tuple[Tensor, int]
Returns: loss and sample count
- predict(sentences, **predictargs)View on GitHub#
Predicts the class labels for the given sentences.
The labels are directly added to the sentences.
- Parameters:
  - sentences – list of sentences
  - mini_batch_size – mini batch size to use
  - return_probabilities_for_all_classes – return probabilities for all classes instead of only the best predicted one
  - verbose – set to True to display a progress bar
  - return_loss – set to True to return loss
  - label_name – set this to change the name of the label type that is predicted
  - embedding_storage_mode – default is 'none', which is always best. Only set to 'cpu' or 'gpu' if you wish to not only predict, but also keep the generated embeddings in CPU or GPU memory respectively.
- static split_batch_to_task_ids(sentences, all_tasks=False)View on GitHub#
Splits a batch of sentences to its respective model.
If a single sentence is assigned to several tasks (i.e. same corpus but different tasks), the model assignment for this batch is chosen randomly.
- Parameters:
  - sentences – batch of sentences to split
  - all_tasks – if True, a sentence is assigned to all of its tasks; otherwise a single task is sampled, as during training
- Return type: Dict
Returns: Key-value pairs as (task_id, list of sentence ids in batch)
- evaluate(data_points, gold_label_type, out_path=None, main_evaluation_metric=('micro avg', 'f1-score'), evaluate_all=True, **evalargs)View on GitHub#
Evaluates the model. Returns a Result object containing evaluation results and a loss value.
- Parameters:
  - data_points – batch of sentences
  - gold_label_type (str) – if evaluate_all is False, specify the task to evaluate by its task_id.
  - out_path (Union[str, Path, None]) – if not None, predictions will be created and saved at the respective file.
  - main_evaluation_metric (Tuple[str, str]) – Specify which metric to highlight as main_score
  - evaluate_all (bool) – choose whether all tasks should be evaluated, or a single one, depending on gold_label_type
  - **evalargs – arguments propagated to flair.nn.Model.evaluate()
- Return type: Result
Returns: Tuple of Result object and loss value (float)
- get_used_tokens(corpus)View on GitHub#
- Return type: Iterable[List[str]]
- _get_state_dict()View on GitHub#
Returns the state dict of the multitask model which has multiple models underneath.
- classmethod _init_model_with_state_dict(state, **kwargs)View on GitHub#
Initializes the model based on given state dict.
- property label_type#
Each model predicts labels of a certain type.
- classmethod load(model_path)View on GitHub#
Loads the model from the given file.
- Parameters:
  - model_path (Union[str, Path, Dict[str, Any]]) – the model file or the already loaded state dict
- Returns: the loaded model