ImageClassificationData

class flash.image.classification.data.ImageClassificationData(train_input=None, val_input=None, test_input=None, predict_input=None, data_fetcher=None, transform=<class 'flash.core.data.io.input_transform.InputTransform'>, transform_kwargs=None, val_split=None, batch_size=None, num_workers=0, sampler=None, pin_memory=True, persistent_workers=False)[source]

The ImageClassificationData class is a DataModule with a set of classmethods for loading data for image classification.

classmethod from_csv(input_field, target_fields=None, train_file=None, train_images_root=None, train_resolver=None, val_file=None, val_images_root=None, val_resolver=None, test_file=None, test_images_root=None, test_resolver=None, predict_file=None, predict_images_root=None, predict_resolver=None, target_formatter=None, input_cls=<class 'flash.image.classification.input.ImageClassificationCSVInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Load the ImageClassificationData from CSV files containing image file paths and their corresponding targets.

Input images will be extracted from the input_field column in the CSV files. The supported file extensions are: .jpg, .jpeg, .png, .ppm, .bmp, .pgm, .tif, .tiff, .webp, and .npy. The targets will be extracted from the target_fields in the CSV files and can be in any of our supported classification target formats. To learn how to customize the transforms applied for each stage, read our customizing transforms guide.
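To give a feel for what "supported classification target formats" means, here is a minimal, illustrative sketch of the kind of normalization such handling performs for single-label targets. This is not Flash's actual implementation (Flash uses `TargetFormatter` classes internally); the helper below is a hypothetical stand-in showing the common formats: plain string labels, integer class indices, and one-hot lists.

```python
# Illustrative only: NOT Flash's TargetFormatter, just a sketch of how
# common single-label target formats can be normalized to class indices.
def normalize_target(target, labels):
    if isinstance(target, str):            # plain label, e.g. "cat"
        return labels.index(target)
    if isinstance(target, int):            # already a class index, e.g. 1
        return target
    if isinstance(target, (list, tuple)):  # one-hot, e.g. [0, 1]
        return list(target).index(1)
    raise TypeError(f"Unsupported target format: {target!r}")

labels = ["cat", "dog"]
print([normalize_target(t, labels) for t in ["cat", 1, [0, 1]]])  # [0, 1, 1]
```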

Parameters
Return type

ImageClassificationData

Returns

The constructed ImageClassificationData.

Examples

The file train_data.csv contains the following:

images,targets
image_1.png,cat
image_2.png,dog
image_3.png,cat

The file predict_data.csv contains the following:

images
predict_image_1.png
predict_image_2.png
predict_image_3.png
>>> from flash import Trainer
>>> from flash.image import ImageClassifier, ImageClassificationData
>>> datamodule = ImageClassificationData.from_csv(
...     "images",
...     "targets",
...     train_file="train_data.csv",
...     train_images_root="train_folder",
...     predict_file="predict_data.csv",
...     predict_images_root="predict_folder",
...     transform_kwargs=dict(image_size=(128, 128)),
...     batch_size=2,
... )
>>> datamodule.num_classes
2
>>> datamodule.labels
['cat', 'dog']
>>> model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)
>>> trainer = Trainer(fast_dev_run=True)
>>> trainer.fit(model, datamodule=datamodule)  
Training...
>>> trainer.predict(model, datamodule=datamodule)  
Predicting...
classmethod from_data_frame(input_field, target_fields=None, train_data_frame=None, train_images_root=None, train_resolver=None, val_data_frame=None, val_images_root=None, val_resolver=None, test_data_frame=None, test_images_root=None, test_resolver=None, predict_data_frame=None, predict_images_root=None, predict_resolver=None, target_formatter=None, input_cls=<class 'flash.image.classification.input.ImageClassificationDataFrameInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Load the ImageClassificationData from pandas DataFrame objects containing image files and their corresponding targets.

Input images will be extracted from the input_field in the DataFrame. The supported file extensions are: .jpg, .jpeg, .png, .ppm, .bmp, .pgm, .tif, .tiff, .webp, and .npy. The targets will be extracted from the target_fields in the DataFrame and can be in any of our supported classification target formats. To learn how to customize the transforms applied for each stage, read our customizing transforms guide.

Parameters
  • input_field (str) – The field (column name) in the DataFrames containing the image file paths.

  • target_fields (Union[str, Sequence[str], None]) – The field (column name) or list of fields in the DataFrames containing the targets.

  • train_data_frame (Optional[DataFrame]) – The pandas DataFrame to use when training.

  • train_images_root (Optional[str]) – The root directory containing train images.

  • train_resolver (Optional[Callable[[str, str], str]]) – Optionally provide a function which converts an entry from the input_field into an image file path.

  • val_data_frame (Optional[DataFrame]) – The pandas DataFrame to use when validating.

  • val_images_root (Optional[str]) – The root directory containing validation images.

  • val_resolver (Optional[Callable[[str, str], str]]) – Optionally provide a function which converts an entry from the input_field into an image file path.

  • test_data_frame (Optional[DataFrame]) – The pandas DataFrame to use when testing.

  • test_images_root (Optional[str]) – The root directory containing test images.

  • test_resolver (Optional[Callable[[str, str], str]]) – Optionally provide a function which converts an entry from the input_field into an image file path.

  • predict_data_frame (Optional[DataFrame]) – The pandas DataFrame to use when predicting.

  • predict_images_root (Optional[str]) – The root directory containing predict images.

  • predict_resolver (Optional[Callable[[str, str], str]]) – Optionally provide a function which converts an entry from the input_field into an image file path.

  • target_formatter (Optional[TargetFormatter]) – Optionally provide a TargetFormatter to control how targets are handled. See Formatting Classification Targets for more details.

  • input_cls (Type[Input]) – The Input type to use for loading the data.

  • transform (TypeVar(INPUT_TRANSFORM_TYPE, Type[flash.core.data.io.input_transform.InputTransform], Callable, Tuple[Union[LightningEnum, str], Dict[str, Any]], Union[LightningEnum, str], None)) – The InputTransform type to use.

  • transform_kwargs (Optional[Dict]) – Dict of keyword arguments to be provided when instantiating the transforms.

  • data_module_kwargs (Any) – Additional keyword arguments to provide to the DataModule constructor.

Return type

ImageClassificationData

Returns

The constructed ImageClassificationData.

Examples

>>> from pandas import DataFrame
>>> from flash import Trainer
>>> from flash.image import ImageClassifier, ImageClassificationData
>>> train_data_frame = DataFrame.from_dict(
...     {
...         "images": ["image_1.png", "image_2.png", "image_3.png"],
...         "targets": ["cat", "dog", "cat"],
...     }
... )
>>> predict_data_frame = DataFrame.from_dict(
...     {
...         "images": ["predict_image_1.png", "predict_image_2.png", "predict_image_3.png"],
...     }
... )
>>> datamodule = ImageClassificationData.from_data_frame(
...     "images",
...     "targets",
...     train_data_frame=train_data_frame,
...     train_images_root="train_folder",
...     predict_data_frame=predict_data_frame,
...     predict_images_root="predict_folder",
...     transform_kwargs=dict(image_size=(128, 128)),
...     batch_size=2,
... )
>>> datamodule.num_classes
2
>>> datamodule.labels
['cat', 'dog']
>>> model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)
>>> trainer = Trainer(fast_dev_run=True)
>>> trainer.fit(model, datamodule=datamodule)  
Training...
>>> trainer.predict(model, datamodule=datamodule)  
Predicting...
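The resolver parameters above (train_resolver, val_resolver, etc.) each take a callable that maps an images root and a raw input_field value to a concrete file path. As a minimal sketch, assuming the DataFrame stores bare file stems without extensions, a custom resolver might look like this (the function name and the ".png" convention are illustrative, not part of the Flash API):

```python
import os

# Hypothetical resolver: Flash calls it with the images root and the raw
# value from ``input_field``; it must return the full path to the image.
def train_resolver(root, file_id):
    # Append an extension when the DataFrame stores bare stems.
    if not file_id.endswith(".png"):
        file_id = file_id + ".png"
    return os.path.join(root, file_id)

print(train_resolver("train_folder", "image_1"))
```

A resolver like this would then be passed as `train_resolver=train_resolver` alongside `train_images_root`.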
classmethod from_datasets(train_dataset=None, val_dataset=None, test_dataset=None, predict_dataset=None, input_cls=<class 'flash.core.data.data_module.DatasetInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Creates a DataModule object from the given datasets using the Input of name DATASETS from the passed or constructed InputTransform.

Parameters
Return type

DataModule

Returns

The constructed data module.

Examples:

data_module = DataModule.from_datasets(
    train_dataset=train_dataset,
)
classmethod from_fiftyone(train_dataset=None, val_dataset=None, test_dataset=None, predict_dataset=None, label_field='ground_truth', target_formatter=None, input_cls=<class 'flash.image.classification.input.ImageClassificationFiftyOneInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Load the ImageClassificationData from FiftyOne SampleCollection objects.

The supported file extensions are: .jpg, .jpeg, .png, .ppm, .bmp, .pgm, .tif, .tiff, .webp, and .npy. The targets will be extracted from the label_field in the SampleCollection objects and can be in any of our supported classification target formats. To learn how to customize the transforms applied for each stage, read our customizing transforms guide.

Parameters
Return type

ImageClassificationData

Returns

The constructed ImageClassificationData.

Examples

>>> import fiftyone as fo
>>> from flash import Trainer
>>> from flash.image import ImageClassifier, ImageClassificationData
>>> train_dataset = fo.Dataset.from_images(
...     ["image_1.png", "image_2.png", "image_3.png"]
... )
>>> samples = [train_dataset[filepath] for filepath in train_dataset.values("filepath")]
>>> for sample, label in zip(samples, ["cat", "dog", "cat"]):
...     sample["ground_truth"] = fo.Classification(label=label)
...     sample.save()
...
>>> predict_dataset = fo.Dataset.from_images(
...     ["predict_image_1.png", "predict_image_2.png", "predict_image_3.png"]
... )
>>> datamodule = ImageClassificationData.from_fiftyone(
...     train_dataset=train_dataset,
...     predict_dataset=predict_dataset,
...     transform_kwargs=dict(image_size=(128, 128)),
...     batch_size=2,
... )
>>> datamodule.num_classes
2
>>> datamodule.labels
['cat', 'dog']
>>> model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)
>>> trainer = Trainer(fast_dev_run=True)
>>> trainer.fit(model, datamodule=datamodule)  
Training...
>>> trainer.predict(model, datamodule=datamodule)  
Predicting...
classmethod from_files(train_files=None, train_targets=None, val_files=None, val_targets=None, test_files=None, test_targets=None, predict_files=None, target_formatter=None, input_cls=<class 'flash.image.classification.input.ImageClassificationFilesInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Load the ImageClassificationData from lists of files and corresponding lists of targets.

The supported file extensions are: .jpg, .jpeg, .png, .ppm, .bmp, .pgm, .tif, .tiff, .webp, and .npy. The targets can be in any of our supported classification target formats. To learn how to customize the transforms applied for each stage, read our customizing transforms guide.

Parameters
Return type

ImageClassificationData

Returns

The constructed ImageClassificationData.

Examples

>>> from flash import Trainer
>>> from flash.image import ImageClassifier, ImageClassificationData
>>> datamodule = ImageClassificationData.from_files(
...     train_files=["image_1.png", "image_2.png", "image_3.png"],
...     train_targets=["cat", "dog", "cat"],
...     predict_files=["predict_image_1.png", "predict_image_2.png", "predict_image_3.png"],
...     transform_kwargs=dict(image_size=(128, 128)),
...     batch_size=2,
... )
>>> datamodule.num_classes
2
>>> datamodule.labels
['cat', 'dog']
>>> model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)
>>> trainer = Trainer(fast_dev_run=True)
>>> trainer.fit(model, datamodule=datamodule)  
Training...
>>> trainer.predict(model, datamodule=datamodule)  
Predicting...
classmethod from_folders(train_folder=None, val_folder=None, test_folder=None, predict_folder=None, target_formatter=None, input_cls=<class 'flash.image.classification.input.ImageClassificationFolderInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Load the ImageClassificationData from folders containing images.

The supported file extensions are: .jpg, .jpeg, .png, .ppm, .bmp, .pgm, .tif, .tiff, .webp, and .npy. For train, test, and validation data, the folders are expected to contain a sub-folder for each class. Here’s the required structure:

train_folder
├── cat
│   ├── image_1.png
│   ├── image_3.png
│   ...
└── dog
    ├── image_2.png
    ...

For prediction, the folder is expected to contain the files for inference, like this:

predict_folder
├── predict_image_1.png
├── predict_image_2.png
├── predict_image_3.png
...

To learn how to customize the transforms applied for each stage, read our customizing transforms guide.
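The expected layout above can be produced programmatically. The following sketch builds the class-per-sub-folder structure with empty placeholder files standing in for real images (paths and file names are illustrative):

```python
import os
import tempfile

# Build the directory layout from_folders expects: one sub-folder per
# class under the training root. File names here are placeholders.
root = tempfile.mkdtemp()
layout = {
    "cat": ["image_1.png", "image_3.png"],
    "dog": ["image_2.png"],
}
for class_name, files in layout.items():
    class_dir = os.path.join(root, "train_folder", class_name)
    os.makedirs(class_dir, exist_ok=True)
    for name in files:
        # Create empty placeholder files where real images would live.
        open(os.path.join(class_dir, name), "a").close()

print(sorted(os.listdir(os.path.join(root, "train_folder"))))  # ['cat', 'dog']
```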

Parameters
Return type

ImageClassificationData

Returns

The constructed ImageClassificationData.

Examples

>>> from flash import Trainer
>>> from flash.image import ImageClassifier, ImageClassificationData
>>> datamodule = ImageClassificationData.from_folders(
...     train_folder="train_folder",
...     predict_folder="predict_folder",
...     transform_kwargs=dict(image_size=(128, 128)),
...     batch_size=2,
... )
>>> datamodule.num_classes
2
>>> datamodule.labels
['cat', 'dog']
>>> model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)
>>> trainer = Trainer(fast_dev_run=True)
>>> trainer.fit(model, datamodule=datamodule)  
Training...
>>> trainer.predict(model, datamodule=datamodule)  
Predicting...
classmethod from_labelstudio(export_json=None, train_export_json=None, val_export_json=None, test_export_json=None, predict_export_json=None, data_folder=None, train_data_folder=None, val_data_folder=None, test_data_folder=None, predict_data_folder=None, input_cls=<class 'flash.core.integrations.labelstudio.input.LabelStudioImageClassificationInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, val_split=None, multi_label=False, **data_module_kwargs)[source]

Creates a DataModule object from the given export file and data directory using the Input of name FOLDERS from the passed or constructed InputTransform.

Parameters
  • export_json (Optional[str]) – Path to the Label Studio export file.

  • train_export_json (Optional[str]) – Path to the Label Studio export file for the train set (overrides export_json if specified).

  • val_export_json (Optional[str]) – Path to the Label Studio export file for the validation set.

  • test_export_json (Optional[str]) – Path to the Label Studio export file for the test set.

  • predict_export_json (Optional[str]) – Path to the Label Studio export file for the predict set.

  • data_folder (Optional[str]) – Path to the Label Studio data folder.

  • train_data_folder (Optional[str]) – Path to the Label Studio data folder for the train data set (overrides data_folder if specified).

  • val_data_folder (Optional[str]) – Path to the Label Studio data folder for the validation data.

  • test_data_folder (Optional[str]) – Path to the Label Studio data folder for the test data.

  • predict_data_folder (Optional[str]) – Path to the Label Studio data folder for the predict data.

  • input_cls (Type[Input]) – The Input type to use for loading the data.

  • transform (TypeVar(INPUT_TRANSFORM_TYPE, Type[flash.core.data.io.input_transform.InputTransform], Callable, Tuple[Union[LightningEnum, str], Dict[str, Any]], Union[LightningEnum, str], None)) – The InputTransform type to use.

  • transform_kwargs (Optional[Dict]) – Dict of keyword arguments to be provided when instantiating the transforms.

  • val_split (Optional[float]) – The val_split argument to pass to the DataModule.

  • multi_label (Optional[bool]) – Whether the targets are multi-label.

  • data_module_kwargs (Any) – Additional keyword arguments to use when constructing the datamodule.

Return type

ImageClassificationData

Returns

The constructed data module.

Examples:

data_module = DataModule.from_labelstudio(
    export_json='project.json',
    data_folder='label-studio/media/upload',
    val_split=0.8,
)
classmethod from_numpy(train_data=None, train_targets=None, val_data=None, val_targets=None, test_data=None, test_targets=None, predict_data=None, target_formatter=None, input_cls=<class 'flash.image.classification.input.ImageClassificationNumpyInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Load the ImageClassificationData from numpy arrays (or lists of arrays) and corresponding lists of targets.

The targets can be in any of our supported classification target formats. To learn how to customize the transforms applied for each stage, read our customizing transforms guide.

Parameters
Return type

ImageClassificationData

Returns

The constructed ImageClassificationData.

Examples

>>> import numpy as np
>>> from flash import Trainer
>>> from flash.image import ImageClassifier, ImageClassificationData
>>> datamodule = ImageClassificationData.from_numpy(
...     train_data=[np.random.rand(3, 64, 64), np.random.rand(3, 64, 64), np.random.rand(3, 64, 64)],
...     train_targets=["cat", "dog", "cat"],
...     predict_data=[np.random.rand(3, 64, 64)],
...     transform_kwargs=dict(image_size=(128, 128)),
...     batch_size=2,
... )
>>> datamodule.num_classes
2
>>> datamodule.labels
['cat', 'dog']
>>> model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)
>>> trainer = Trainer(fast_dev_run=True)
>>> trainer.fit(model, datamodule=datamodule)  
Training...
>>> trainer.predict(model, datamodule=datamodule)  
Predicting...
classmethod from_tensors(train_data=None, train_targets=None, val_data=None, val_targets=None, test_data=None, test_targets=None, predict_data=None, target_formatter=None, input_cls=<class 'flash.image.classification.input.ImageClassificationTensorInput'>, transform=<class 'flash.image.classification.input_transform.ImageClassificationInputTransform'>, transform_kwargs=None, **data_module_kwargs)[source]

Load the ImageClassificationData from torch tensors (or lists of tensors) and corresponding lists of targets.

The targets can be in any of our supported classification target formats. To learn how to customize the transforms applied for each stage, read our customizing transforms guide.

Parameters
Return type

ImageClassificationData

Returns

The constructed ImageClassificationData.

Examples

>>> import torch
>>> from flash import Trainer
>>> from flash.image import ImageClassifier, ImageClassificationData
>>> datamodule = ImageClassificationData.from_tensors(
...     train_data=[torch.rand(3, 64, 64), torch.rand(3, 64, 64), torch.rand(3, 64, 64)],
...     train_targets=["cat", "dog", "cat"],
...     predict_data=[torch.rand(3, 64, 64)],
...     transform_kwargs=dict(image_size=(128, 128)),
...     batch_size=2,
... )
>>> datamodule.num_classes
2
>>> datamodule.labels
['cat', 'dog']
>>> model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)
>>> trainer = Trainer(fast_dev_run=True)
>>> trainer.fit(model, datamodule=datamodule)  
Training...
>>> trainer.predict(model, datamodule=datamodule)  
Predicting...
input_transform_cls

alias of flash.image.classification.input_transform.ImageClassificationInputTransform

set_block_viz_window(value)[source]

Setter method to enable or disable blocking matplotlib pop-up windows during visualization.

Return type

None
