flair.datasets.text_text.DataTripleCorpus

class flair.datasets.text_text.DataTripleCorpus(data_folder, columns=[0, 1, 2, 3], train_file=None, test_file=None, dev_file=None, use_tokenizer=True, max_tokens_per_doc=-1, max_chars_per_doc=-1, in_memory=True, label_type=None, autofind_splits=True, sample_missing_splits=True, skip_first_line=False, separator='\t', encoding='utf-8')

Bases: Corpus

__init__(data_folder, columns=[0, 1, 2, 3], train_file=None, test_file=None, dev_file=None, use_tokenizer=True, max_tokens_per_doc=-1, max_chars_per_doc=-1, in_memory=True, label_type=None, autofind_splits=True, sample_missing_splits=True, skip_first_line=False, separator='\t', encoding='utf-8')

Corpus for tasks involving triples of sentences or paragraphs.

The data files are expected to be in column format, where each line has one column for the first sentence/paragraph, one for the second sentence/paragraph, one for the third sentence/paragraph, and one for the label. The columns must be separated by a given separator (default: '\t').
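For example, with the default column order, a line of a data file could look as follows, where \t stands for a literal tab character (the sentences and the label are purely illustrative):

    A first sentence. \t A second sentence. \t A third sentence. \t POSITIVE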

Parameters:
  • data_folder (Union[str, Path]) – base folder with the task data

  • columns (list[int]) – List that indicates the columns for the first sentence (first entry in the list), the second sentence (second entry), the third sentence (third entry), and the label (last entry). Default: [0, 1, 2, 3]

  • train_file – the name of the train file

  • test_file – the name of the test file, if None, test data is sampled from train (if sample_missing_splits is true)

  • dev_file – the name of the dev file, if None, dev data is sampled from train (if sample_missing_splits is true)

  • use_tokenizer (bool) – Whether to use the in-built tokenizer

  • max_tokens_per_doc – If set, shortens sentences to this maximum number of tokens

  • max_chars_per_doc – If set, shortens sentences to this maximum number of characters

  • in_memory (bool) – If True, data is kept in memory as a list of flair.data.DataTriple objects; otherwise, only plain strings are kept, which requires less space

  • label_type (Optional[str]) – Name of the label of the data triples

  • autofind_splits – If True, train/test/dev files will be automatically identified in the given data_folder

  • sample_missing_splits (bool) – If True, a missing train/test/dev file will be sampled from the available data

  • skip_first_line (bool) – If True, the first line of data files will be ignored

  • separator (str) – Separator between columns in data files

  • encoding (str) – Encoding of data files

Returns:

a Corpus with annotated train, dev, and test data
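A minimal usage sketch, assuming a hypothetical data folder with tab-separated train.tsv, dev.tsv, and test.tsv files in the default column order (the folder path, file names, and label type name below are illustrative assumptions, not fixed by the API):

    from flair.datasets.text_text import DataTripleCorpus

    # Hypothetical folder containing train.tsv, dev.tsv and test.tsv;
    # autofind_splits (True by default) picks these files up automatically.
    corpus = DataTripleCorpus(
        data_folder="resources/tasks/my_triple_task",
        columns=[0, 1, 2, 3],       # sentence 1, sentence 2, sentence 3, label
        label_type="triple_label",  # arbitrary name for the label of the triples
        skip_first_line=True,       # set if the files start with a header row
    )
    print(corpus)  # prints the number of train/dev/test data points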

Methods

__init__(data_folder[, columns, train_file, ...])
    Corpus for tasks involving triples of sentences or paragraphs.

add_label_noise(label_type, labels[, ...])
    Generates a uniform label noise distribution in the chosen dataset split.

downsample([percentage, downsample_train, ...])
    Randomly downsamples the corpus to the given percentage (by removing data points).

filter_empty_sentences()
    Filters out all sentences consisting of 0 tokens.

filter_long_sentences(max_charlength)
    Filters out all sentences whose plain text is longer than a specified number of characters.

get_all_sentences()
    Returns all sentences (spanning all three splits) in the Corpus.

get_label_distribution()
    Counts occurrences of each label in the corpus and returns them as a dictionary object.

make_label_dictionary(label_type[, ...])
    Creates a dictionary of all labels assigned to the sentences in the corpus.

make_tag_dictionary(tag_type)
    Creates a tag dictionary of a given label type.

make_vocab_dictionary([max_tokens, min_freq])
    Creates a Dictionary of all tokens contained in the corpus.

obtain_statistics([label_type, pretty_print])
    Prints statistics about the corpus, including the length of the sentences and the labels in the corpus.
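For instance, continuing the sketch above (the corpus object and the label type name "triple_label" are assumptions), the label dictionary and corpus statistics can be obtained like this:

    # collect all labels of the given type into a dictionary
    label_dict = corpus.make_label_dictionary(label_type="triple_label")

    # compute and print statistics for all three splits
    print(corpus.obtain_statistics(label_type="triple_label"))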

Attributes

dev
    The dev split as a torch.utils.data.Dataset object.

test
    The test split as a torch.utils.data.Dataset object.

train
    The training split as a torch.utils.data.Dataset object.
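Since each split is a torch.utils.data.Dataset, it can be inspected directly, again assuming the corpus object from the sketch above:

    print(len(corpus.train))  # number of training data points
    print(corpus.dev[0])      # first data point of the dev split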