flair.datasets.entity_linking.NEL_ENGLISH_IITB

class flair.datasets.entity_linking.NEL_ENGLISH_IITB(base_path=None, in_memory=True, ignore_disagreements=False, sentence_splitter=<flair.splitter.SegtokSentenceSplitter object>, **corpusargs)

Bases: ColumnCorpus

__init__(base_path=None, in_memory=True, ignore_disagreements=False, sentence_splitter=<flair.splitter.SegtokSentenceSplitter object>, **corpusargs)

Initialize the IITB Entity Linking corpus.

The corpus was introduced in “Collective Annotation of Wikipedia Entities in Web Text” by Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti.

The first time you call the constructor, the dataset is automatically downloaded.

Parameters:
  • base_path (Union[str, Path], optional) – Default is None, meaning the corpus is automatically downloaded and loaded. You can override this to point to a different folder, but typically this should not be necessary.

  • in_memory (bool) – If True, keeps the dataset in memory, giving speedups in training.

  • ignore_disagreements (bool) – If True, annotations with annotator disagreement will be ignored.

  • sentence_splitter (SentenceSplitter) – The sentence splitter used to split the articles into sentences.
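
A minimal usage sketch, assuming Flair is installed; the default arguments trigger the automatic download on first use, and the alternative call below only combines the parameters documented above:

    from flair.datasets import NEL_ENGLISH_IITB
    from flair.splitter import SegtokSentenceSplitter

    # Default setup: downloads the data on the first call and loads it into memory.
    corpus = NEL_ENGLISH_IITB()

    # Alternative setup: ignore annotations with annotator disagreement and pass
    # an explicit sentence splitter (the default shown in the signature above).
    corpus = NEL_ENGLISH_IITB(
        in_memory=True,
        ignore_disagreements=True,
        sentence_splitter=SegtokSentenceSplitter(),
    )

    print(corpus)  # prints the number of sentences per split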

Methods

__init__([base_path, in_memory, ...])

Initialize the IITB Entity Linking corpus.

add_label_noise(label_type, labels[, ...])

Generates a uniform label noise distribution in the chosen dataset split.

downsample([percentage, downsample_train, ...])

Randomly downsample the corpus to the given percentage (by removing data points).

filter_empty_sentences()

A method that filters all sentences consisting of 0 tokens.

filter_long_sentences(max_charlength)

A method that filters all sentences for which the plain text is longer than a specified number of characters.

get_all_sentences()

Returns all sentences (spanning all three splits) in the Corpus.

get_label_distribution()

Counts occurrences of each label in the corpus and returns them as a dictionary object.

make_label_dictionary(label_type[, ...])

Creates a dictionary of all labels assigned to the sentences in the corpus.

make_tag_dictionary(tag_type)

Create a tag dictionary of a given label type.

make_vocab_dictionary([max_tokens, min_freq])

Creates a Dictionary of all tokens contained in the corpus.

obtain_statistics([label_type, pretty_print])

Print statistics about the corpus, including the length of the sentences and the labels in the corpus.
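
The methods above are inherited from the Corpus base class. The following is a small sketch of how a few of them might be combined; the label type string "nel" is an assumption about the annotation layer used by this corpus and is not confirmed by the signature above:

    from flair.datasets import NEL_ENGLISH_IITB

    corpus = NEL_ENGLISH_IITB()

    # Work with a 10% sample of each split for quick experiments.
    corpus = corpus.downsample(0.1)

    # Print corpus statistics (sentence lengths, label counts per split).
    print(corpus.obtain_statistics())

    # Build a dictionary of all entity labels; "nel" is an assumed label type.
    label_dictionary = corpus.make_label_dictionary(label_type="nel")
    print(label_dictionary)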

Attributes

dev

The dev split as a torch.utils.data.Dataset object.

test

The test split as a torch.utils.data.Dataset object.

train

The training split as a torch.utils.data.Dataset object.
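
Each split is a torch.utils.data.Dataset of Flair Sentence objects, so it can be indexed and inspected directly. A sketch, again assuming "nel" as the label type:

    from flair.datasets import NEL_ENGLISH_IITB

    corpus = NEL_ENGLISH_IITB()

    # Sizes of the three splits.
    print(len(corpus.train), len(corpus.dev), len(corpus.test))

    # Inspect the first training sentence and its entity-linking annotations.
    sentence = corpus.train[0]
    print(sentence)
    for span in sentence.get_spans("nel"):  # "nel" label type is an assumption
        print(span.text, "->", span.get_label("nel").value)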