QuestionAnsweringTask

class flash.text.question_answering.model.QuestionAnsweringTask(backbone='sshleifer/tiny-distilbert-base-cased-distilled-squad', max_source_length=384, max_target_length=30, padding='max_length', doc_stride=128, loss_fn=None, optimizer='Adam', lr_scheduler=None, metrics=None, learning_rate=None, enable_ort=False, n_best_size=20, version_2_with_negative=True, null_score_diff_threshold=0.0, use_stemmer=True)[source]

The QuestionAnsweringTask is a Task for extractive question answering. For more details, see question_answering.
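Extractive question answering predicts a start position and an end position in the context and returns the text between them. As a rough illustration (a simplified plain-Python sketch, not the actual Flash or transformers postprocessing; the scores are made up), the best answer is the highest-scoring valid (start, end) pair whose length does not exceed the maximum answer length:

```python
# Simplified sketch of extractive-QA span selection (illustrative only).
# The model emits a start score and an end score per token; the answer is
# the highest-scoring (start, end) pair with end >= start and bounded length.

def best_span(start_scores, end_scores, max_answer_length=30):
    """Return (start, end, score) of the best valid answer span."""
    best = (0, 0, float("-inf"))
    for start, s_score in enumerate(start_scores):
        for end in range(start, min(start + max_answer_length, len(end_scores))):
            score = s_score + end_scores[end]
            if score > best[2]:
                best = (start, end, score)
    return best

# Toy scores for a 6-token context: the best span is tokens 2..3.
start_scores = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0]
end_scores = [0.0, 0.1, 0.2, 4.0, 0.3, 0.1]
print(best_span(start_scores, end_scores))  # -> (2, 3, 9.0)
```

The length bound matters because, as noted under the parameters below, the start and end predictions are not conditioned on one another.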

You can change the backbone to any question answering model from HuggingFace/transformers using the backbone argument.

Note

When changing the backbone, make sure you pass the same backbone to both the Task and the DataModule object! Since this is a question answering task, make sure the backbone is a question answering model.

Parameters
  • backbone (str) – backbone model to use for the task.

  • max_source_length (int) – Max length of the sequence to be considered during tokenization.

  • max_target_length (int) – Max length of each answer to be produced.

  • padding (Union[str, bool]) – Padding type during tokenization.

  • doc_stride (int) – The stride amount to be taken when splitting up a long document into chunks.

  • loss_fn (Union[Callable, Mapping, Sequence, None]) – Loss function for training.

  • optimizer (Union[str, Callable, Tuple[str, Dict[str, Any]], None]) – Optimizer to use for training.

  • lr_scheduler (Union[str, Callable, Tuple[str, Dict[str, Any]], Tuple[str, Dict[str, Any], Dict[str, Any]], None]) – The LR scheduler to use during training.

  • metrics (Union[Metric, Mapping, Sequence, None]) – Metrics to compute for training and evaluation. Defaults to calculating the ROUGE metric. Changing this argument currently has no effect.

  • learning_rate (Optional[float]) – Learning rate to use for training; defaults to 3e-4 when not set.

  • enable_ort (bool) – Enable Torch ONNX Runtime Optimization: https://onnxruntime.ai/docs/#onnx-runtime-for-training

  • n_best_size (int) – The total number of n-best predictions to generate when looking for an answer.

  • version_2_with_negative (bool) – If True, some of the examples may not have an answer (as in SQuAD v2).

  • max_answer_length – The maximum length of an answer that can be generated. This is needed because the start and end predictions are not conditioned on one another.

  • null_score_diff_threshold (float) – The threshold used to select the null answer: if the best answer has a score that is less than the score of the null answer minus this threshold, the null answer is selected for this example. Only useful when version_2_with_negative=True.

  • use_stemmer (bool) – Whether Porter stemmer should be used to strip word suffixes to improve matching.
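Several of the arguments above interact during preprocessing: a context longer than max_source_length is split into overlapping chunks, with doc_stride controlling how many tokens consecutive chunks share. A rough sketch in plain Python (a hypothetical helper for illustration — the real tokenizer operates on tokens and also reserves room for the question and special tokens):

```python
# Illustrative sketch of overlapping-chunk splitting (not the real tokenizer).
# Consecutive windows of ``max_length`` items overlap by ``doc_stride`` items,
# so an answer cut off at one chunk boundary appears whole in the next chunk.

def split_into_chunks(tokens, max_length=384, doc_stride=128):
    """Split ``tokens`` into windows of ``max_length`` overlapping by ``doc_stride``."""
    if len(tokens) <= max_length:
        return [tokens]
    step = max_length - doc_stride  # how far the window advances each time
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_length])
        if start + max_length >= len(tokens):
            break
    return chunks

tokens = list(range(1000))  # stand-in for 1,000 context tokens
chunks = split_into_chunks(tokens, max_length=384, doc_stride=128)
print([len(c) for c in chunks])  # -> [384, 384, 384, 232]
```

Each chunk is scored independently, and the n_best_size candidate answers are gathered across all chunks of a document before the final answer is selected.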

classmethod available_finetuning_strategies()

Returns a list containing the keys of the available Finetuning Strategies.

Return type

List[str]

classmethod available_lr_schedulers()

Returns a list containing the keys of the available LR schedulers.

Return type

List[str]

classmethod available_optimizers()

Returns a list containing the keys of the available Optimizers.

Return type

List[str]

classmethod available_outputs()

Returns the list of available outputs (that can be used during prediction or serving) for this Task.

Examples

>>> from flash import Task
>>> print(Task.available_outputs())
['preds', 'raw']

Return type

List[str]

modules_to_freeze()[source]

Return the module attributes of the model to be frozen.

Return type

Union[Module, Iterable[Union[Module, Iterable]]]

property task: Optional[str]

Override to define AutoConfig task-specific parameters stored within the model.

Return type

Optional[str]
