Task¶
- class flash.core.model.Task(model=None, loss_fn=None, learning_rate=None, optimizer='Adam', lr_scheduler=None, metrics=None, output_transform=None)[source]¶
A general Task.
- Parameters
  - model¶ (Optional[Module]) – Model to use for the task.
  - loss_fn¶ (Optional[Union[Callable, Mapping, Sequence]]) – Loss function for training.
  - learning_rate¶ (Optional[float]) – Learning rate to use for training. If None (the default), the default LR for your chosen optimizer will be used.
  - optimizer¶ (Union[str, Callable, Tuple[str, Dict[str, Any]]]) – Optimizer to use for training.
  - lr_scheduler¶ (Optional[Union[str, Callable, Tuple[str, Dict[str, Any]], Tuple[str, Dict[str, Any], Dict[str, Any]]]]) – The LR scheduler to use during training.
  - metrics¶ (Optional[Union[Metric, Mapping, Sequence]]) – Metrics to compute for training and evaluation. Can either be a metric from the torchmetrics package, a custom metric inheriting from torchmetrics.Metric, a callable function, or a list/dict containing a combination of the aforementioned. In all cases, each metric needs to have the signature metric(preds, target) and return a single scalar tensor.
  - output_transform¶ (Optional[OutputTransform]) – OutputTransform to use as the default for this task.
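Example (a minimal sketch, not from the original docs): constructing a bare Task with a custom model, loss, metric, and an optimizer selected by name. The torchmetrics.Accuracy constructor arguments shown here depend on your installed torchmetrics version.
>>> import torch
>>> import torchmetrics
>>> from flash import Task
>>> task = Task(
...     model=torch.nn.Linear(28 * 28, 10),
...     loss_fn=torch.nn.functional.cross_entropy,
...     metrics=torchmetrics.Accuracy(task="multiclass", num_classes=10),
...     optimizer="Adam",
...     learning_rate=1e-3,
... )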
- static apply_filtering(y, y_hat)[source]¶
This function is used to filter out labels or predictions that do not conform.
- as_embedder(layer)[source]¶
Convert this task to an embedder. Note that the parameters are not copied, so any optimization of the embedder will also apply to the converted Task.
- Parameters
  - layer¶ (str) – The layer to embed to. This should be one of the available_layers().
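Example (a minimal sketch, not from the original docs): converting an ImageClassifier into an embedder. The layer name "backbone" is an assumption; pick a real name from available_layers().
>>> from flash.image import ImageClassifier
>>> model = ImageClassifier(backbone="resnet18", num_classes=10)
>>> print(model.available_layers())  # pick a layer name from this list
>>> embedder = model.as_embedder("backbone")  # "backbone" is assumed to appear in the list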
- classmethod available_finetuning_strategies()[source]¶
Returns a list containing the keys of the available finetuning strategies.
- available_layers()[source]¶
Get the list of available layers for use with the as_embedder() method.
- classmethod available_lr_schedulers()[source]¶
Returns a list containing the keys of the available LR schedulers.
- classmethod available_optimizers()[source]¶
Returns a list containing the keys of the available optimizers.
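Example (a minimal sketch): inspecting the registries exposed by the classmethods above. The exact keys depend on your installed Flash version.
>>> from flash import Task
>>> print(Task.available_optimizers())
>>> print(Task.available_lr_schedulers())
>>> print(Task.available_finetuning_strategies())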
- classmethod available_outputs()[source]¶
Returns the list of available outputs (that can be used during prediction or serving) for this Task.
Examples
>>> from flash import Task
>>> print(Task.available_outputs())
['preds', 'raw']
- configure_optimizers()[source]¶
Implement how optimizer and optionally learning rate schedulers should be configured.
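Example (a minimal sketch, not from the original docs): the optimizer and lr_scheduler arguments documented above also accept the (name, kwargs) tuple forms, which configure_optimizers() resolves against the registries. The keys "SGD" and "StepLR" are assumptions based on the torch.optim names.
>>> import torch
>>> from flash import Task
>>> task = Task(
...     model=torch.nn.Linear(16, 2),
...     loss_fn=torch.nn.functional.cross_entropy,
...     optimizer=("SGD", {"momentum": 0.9}),
...     lr_scheduler=("StepLR", {"step_size": 10}),
...     learning_rate=0.1,
... )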
- get_num_training_steps()[source]¶
Total training steps inferred from the datamodule and devices.
- Return type
  int
- modules_to_freeze()[source]¶
By default, we try to get the backbone attribute from the task and return it, or None if it is not present.
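Example (a hypothetical subclass, a minimal sketch): overriding modules_to_freeze() to expose a different module for finetuning; assumes self.model is an indexable container such as nn.Sequential.
>>> from flash import Task
>>> class TwoLayerTask(Task):
...     def modules_to_freeze(self):
...         # hypothetical override: freeze only the first layer of `self.model`
...         return self.model[0]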
- serve(host='127.0.0.1', port=8000, sanity_check=True, input_cls=None, transform=<class 'flash.core.data.io.input_transform.InputTransform'>, transform_kwargs=None, output=None)[source]¶
Serve the Task. Override this method to provide a default input_cls, transform, and transform_kwargs.
- Parameters
  - sanity_check¶ (bool) – If True, runs a sanity check before serving.
  - input_cls¶ (Optional[Type[ServeInput]]) – The ServeInput type to use.
  - transform¶ (Union[Type[InputTransform], Callable, Tuple[Union[StrEnum, str], Dict[str, Any]], StrEnum, str, None]) – The transform to use when serving.
  - transform_kwargs¶ (Optional[Dict]) – Keyword arguments used to instantiate the transform.
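Example (a minimal sketch): serving a task locally. The checkpoint path is hypothetical.
>>> from flash.text import TextClassifier
>>> model = TextClassifier.load_from_checkpoint("text_classification_model.pt")  # hypothetical checkpoint
>>> model.serve(host="127.0.0.1", port=8000)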
- step(batch, batch_idx, metrics)[source]¶
Implement the core logic for the training/validation/test step. By default this includes:
- Inference on the current batch
- Calculating the loss
- Calculating relevant metrics
Override for custom behavior.
- Parameters
  - batch¶ – The output of your dataloader.
  - batch_idx¶ – Integer index of this batch.
  - metrics¶ – The metrics to compute for this step.
- Returns
  A dict containing both the loss and relevant metrics.
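Example (a hypothetical subclass, a minimal sketch): wrapping the default step to inspect or post-process its output dict before returning it.
>>> from flash import Task
>>> class VerboseTask(Task):
...     def step(self, batch, batch_idx, metrics):
...         outputs = super().step(batch, batch_idx, metrics)
...         # `outputs` is a dict containing the loss and relevant metrics (see above);
...         # log or modify it here before returning.
...         return outputs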