flair.trainers.plugins.LinearSchedulerPlugin

class flair.trainers.plugins.LinearSchedulerPlugin(warmup_fraction)

Bases: TrainerPlugin

Plugin for LinearSchedulerWithWarmup.
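
A minimal usage sketch follows, assuming flair's ModelTrainer.train_custom() accepts a plugins list; the corpus, model, and hyperparameter values are illustrative placeholders, not recommendations from this page:

    from flair.datasets import TREC_6
    from flair.embeddings import TransformerDocumentEmbeddings
    from flair.models import TextClassifier
    from flair.trainers import ModelTrainer
    from flair.trainers.plugins import LinearSchedulerPlugin

    corpus = TREC_6()
    label_type = "question_class"
    label_dict = corpus.make_label_dictionary(label_type=label_type)

    embeddings = TransformerDocumentEmbeddings("distilbert-base-uncased", fine_tune=True)
    classifier = TextClassifier(embeddings, label_dictionary=label_dict, label_type=label_type)

    trainer = ModelTrainer(classifier, corpus)

    # Warm the learning rate up over the first 10% of all training steps,
    # then decay it linearly to zero over the remaining 90%.
    trainer.train_custom(
        "resources/taggers/question-classifier",
        learning_rate=5e-5,
        mini_batch_size=16,
        max_epochs=3,
        plugins=[LinearSchedulerPlugin(warmup_fraction=0.1)],
    )

Note that ModelTrainer.fine_tune() typically sets this plugin up on its own via its warmup_fraction argument; passing it explicitly, as above, is mainly relevant with train_custom().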

__init__(warmup_fraction)

Initialize the base plugin.

Methods

__init__(warmup_fraction)

Initialize the base plugin.

after_setup(dataset_size, mini_batch_size, ...)

Initialize the LinearSchedulerWithWarmup, deriving the number of training and warmup steps from the dataset size, mini-batch size, and maximum number of epochs.

after_training_batch(optimizer_was_run, **kwargs)

Step the scheduler after each training batch, provided the optimizer was actually run.

attach_to(pluggable)

Attach this plugin to a Pluggable.

before_training_epoch(**kwargs)

Record the current learning rate at the start of each training epoch.

detach()

Detach a plugin from the Pluggable it is attached to.

get_state()

hook([first_arg])

Convenience function for BasePlugin.mark_func_as_hook().

mark_func_as_hook(func, *events)

Mark method as a hook triggered by the Pluggable.

store_learning_rate()

Attributes

attach_to_all_processes

If set, the plugin will be attached to all processes when distributed, not just the main process.

corpus

model

pluggable

trainer

store_learning_rate()

Record the current learning rate and momentum from each of the optimizer's parameter groups.
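
A sketch of what this method does, based on a reading of the flair source (the attribute names may differ between versions):

    def store_learning_rate(self):
        # Snapshot the learning rate (and momentum) of every optimizer
        # parameter group so the trainer can log them.
        optimizer = self.trainer.optimizer
        self.current_learning_rate = [group["lr"] for group in optimizer.param_groups]
        self.current_momentum = [
            group["betas"][0] if "betas" in group else group.get("momentum", 0)
            for group in optimizer.param_groups
        ]
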
after_setup(dataset_size, mini_batch_size, max_epochs, **kwargs)

Initialize the LinearSchedulerWithWarmup, deriving the number of training and warmup steps from the dataset size, mini-batch size, and maximum number of epochs.
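
Concretely, the schedule lengths follow from the training geometry. A sketch of the arithmetic (the exact rounding flair uses is an assumption here):

    import math

    def schedule_lengths(dataset_size, mini_batch_size, max_epochs, warmup_fraction):
        """Compute (num_train_steps, num_warmup_steps) for LinearSchedulerWithWarmup."""
        steps_per_epoch = math.ceil(dataset_size / mini_batch_size)  # one step per mini-batch
        num_train_steps = steps_per_epoch * max_epochs
        num_warmup_steps = int(num_train_steps * warmup_fraction)
        return num_train_steps, num_warmup_steps

    # 10,000 sentences, batch size 32, 3 epochs, 10% warmup:
    # 313 steps/epoch -> 939 total steps, of which 93 are warmup.
    print(schedule_lengths(10_000, 32, 3, 0.1))  # (939, 93)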

before_training_epoch(**kwargs)

Record the current learning rate at the start of each training epoch.
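
For this plugin the hook is trivial; judging from the flair source, it amounts to no more than:

    def before_training_epoch(self, **kwargs):
        # Re-read the learning rate so the epoch's first log line is accurate.
        self.store_learning_rate()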

after_training_batch(optimizer_was_run, **kwargs)

Step the scheduler after each training batch, provided the optimizer was actually run.
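
The optimizer_was_run flag exists because an optimizer step can be skipped (for instance by the AMP gradient scaler), and the schedule must not advance in that case. A sketch of the control flow (not the verbatim flair source):

    def after_training_batch(self, optimizer_was_run, **kwargs):
        # If the optimizer step was skipped, stepping the scheduler anyway
        # would let the learning-rate schedule drift ahead of the actual
        # number of parameter updates.
        if not optimizer_was_run:
            return
        self.scheduler.step()
        self.store_learning_rate()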

get_state()

Return type: dict[str, Any]
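
The returned dictionary presumably carries what is needed to reconstruct the plugin, at minimum its constructor argument; the exact keys below are an assumption:

    from flair.trainers.plugins import LinearSchedulerPlugin

    plugin = LinearSchedulerPlugin(warmup_fraction=0.1)
    state = plugin.get_state()
    # Expected to contain the base-plugin state plus something like:
    #   {"warmup_fraction": 0.1, ...}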