fastai

If you’re using fastai to train your models, W&B has an easy integration using the WandbCallback.

Log with W&B

  1. Sign up for a free account at https://wandb.ai/site and then log in to your wandb account.

  2. Install the wandb library on your machine in a Python 3 environment using pip.

  3. Log in to the wandb library on your machine. Find your API key at https://wandb.ai/authorize, then authenticate:

      • Command line:
        pip install wandb
        wandb login
        
      • Notebook:
        !pip install wandb
        
        import wandb
        wandb.login()
        
  4. Add the WandbCallback to the learner or fit method:

      import wandb
      from fastai.callback.wandb import *
      
      # start logging a wandb run
      wandb.init(project="my_project")
      
      # To log only during one training phase
      learn.fit(..., cbs=WandbCallback())
      
      # To log continuously for all training phases
      learn = Learner(..., cbs=WandbCallback())
      

WandbCallback Arguments

WandbCallback accepts the following arguments:

Args                    Description
log                     Whether to log the model's "gradients", "parameters", "all", or None (default). Losses and metrics are always logged.
log_preds               Whether to log prediction samples (default: True).
log_preds_every_epoch   Whether to log predictions every epoch or only at the end of training (default: False).
log_model               Whether to log the model (default: False). This also requires SaveModelCallback.
model_name              The name of the file to save; overrides SaveModelCallback.
log_dataset             • False (default)
                        • True logs the folder referenced by learn.dls.path.
                        • A path can be defined explicitly to reference which folder to log.
                        Note: the subfolder "models" is always ignored.
dataset_name            Name of the logged dataset (default: the folder name).
valid_dl                DataLoaders containing items used for prediction samples (default: random items from learn.dls.valid).
n_preds                 Number of logged predictions (default: 36).
seed                    Used for defining random samples.
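
For example, here is a minimal sketch combining several of these arguments. It assumes the Oxford-IIIT Pets setup used in the distributed examples below; the project name is a placeholder:

import wandb
from fastai.vision.all import *
from fastai.callback.wandb import WandbCallback
from fastai.callback.tracker import SaveModelCallback

wandb.init(project="my_project")

path = untar_data(URLs.PETS) / "images"
dls = ImageDataLoaders.from_name_func(
    path,
    get_image_files(path),
    valid_pct=0.2,
    label_func=lambda x: x[0].isupper(),
    item_tfms=Resize(224),
)

# log="all" tracks both gradients and parameters; log_model=True uploads the
# best checkpoint, which is why SaveModelCallback is attached as well.
cbs = [WandbCallback(log="all", log_preds=True, log_model=True), SaveModelCallback()]
learn = vision_learner(dls, resnet34, metrics=error_rate, cbs=cbs)
learn.fit_one_cycle(1)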

For custom workflows, you can manually log your datasets and models:

  • log_dataset(path, name=None, metadata={})
  • log_model(path, name=None, metadata={})

Note: any subfolder “models” will be ignored.
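
For instance, a minimal sketch logging a dataset folder and a model file as W&B Artifacts; the paths, names, and metadata here are hypothetical:

import wandb
from fastai.callback.wandb import log_dataset, log_model

# Both helpers require an active run
wandb.init(project="my_project")

# Hypothetical local paths; any "models" subfolder inside the dataset is skipped
log_dataset("data/pets", name="pets", metadata={"source": "Oxford-IIIT Pets"})
log_model("export/model.pth", name="pets-classifier", metadata={"arch": "resnet34"})

wandb.finish()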

Distributed Training

fastai supports distributed training through the distrib_ctx context manager. W&B supports this automatically and lets you track your multi-GPU experiments out of the box.

Review this minimal example:

import wandb
from fastai.vision.all import *
from fastai.distributed import *
from fastai.callback.wandb import WandbCallback

wandb.require(experiment="service")
path = rank0_first(lambda: untar_data(URLs.PETS) / "images")

def train():
    dls = ImageDataLoaders.from_name_func(
        path,
        get_image_files(path),
        valid_pct=0.2,
        label_func=lambda x: x[0].isupper(),
        item_tfms=Resize(224),
    )
    wandb.init("fastai_ddp", entity="capecape")
    cb = WandbCallback()
    learn = vision_learner(dls, resnet34, metrics=error_rate, cbs=cb).to_fp16()
    with learn.distrib_ctx(sync_bn=False):
        learn.fit(1)

if __name__ == "__main__":
    train()

Then execute the following in your terminal:

$ torchrun --nproc_per_node 2 train.py

In this case, the machine has 2 GPUs.

You can now run distributed training directly inside a notebook:

import wandb
from fastai.vision.all import *

from accelerate import notebook_launcher
from fastai.distributed import *
from fastai.callback.wandb import WandbCallback

wandb.require(experiment="service")
path = untar_data(URLs.PETS) / "images"

def train():
    dls = ImageDataLoaders.from_name_func(
        path,
        get_image_files(path),
        valid_pct=0.2,
        label_func=lambda x: x[0].isupper(),
        item_tfms=Resize(224),
    )
    wandb.init("fastai_ddp", entity="capecape")
    cb = WandbCallback()
    learn = vision_learner(dls, resnet34, metrics=error_rate, cbs=cb).to_fp16()
    with learn.distrib_ctx(in_notebook=True, sync_bn=False):
        learn.fit(1)

notebook_launcher(train, num_processes=2)

Log only on the main process

In the examples above, wandb launches one run per process, so at the end of training you will end up with two runs. This can be confusing, and you may want to log only on the main process. To do so, you have to detect which process you are in manually and avoid creating runs (calling wandb.init) in all the other processes.

import wandb
from fastai.vision.all import *
from fastai.distributed import *
from fastai.callback.wandb import WandbCallback

wandb.require(experiment="service")
path = rank0_first(lambda: untar_data(URLs.PETS) / "images")

def train():
    cb = []
    dls = ImageDataLoaders.from_name_func(
        path,
        get_image_files(path),
        valid_pct=0.2,
        label_func=lambda x: x[0].isupper(),
        item_tfms=Resize(224),
    )
    if rank_distrib() == 0:
        run = wandb.init("fastai_ddp", entity="capecape")
        cb = WandbCallback()
    learn = vision_learner(dls, resnet34, metrics=error_rate, cbs=cb).to_fp16()
    with learn.distrib_ctx(sync_bn=False):
        learn.fit(1)

if __name__ == "__main__":
    train()

In your terminal, call:

$ torchrun --nproc_per_node 2 train.py

To run the same inside a notebook:

import wandb
from fastai.vision.all import *

from accelerate import notebook_launcher
from fastai.distributed import *
from fastai.callback.wandb import WandbCallback

wandb.require(experiment="service")
path = untar_data(URLs.PETS) / "images"

def train():
    cb = []
    dls = ImageDataLoaders.from_name_func(
        path,
        get_image_files(path),
        valid_pct=0.2,
        label_func=lambda x: x[0].isupper(),
        item_tfms=Resize(224),
    )
    if rank_distrib() == 0:
        run = wandb.init("fastai_ddp", entity="capecape")
        cb = WandbCallback()
    learn = vision_learner(dls, resnet34, metrics=error_rate, cbs=cb).to_fp16()
    with learn.distrib_ctx(in_notebook=True, sync_bn=False):
        learn.fit(1)

notebook_launcher(train, num_processes=2)

Examples

fastai v1

For scripts using fastai v1, we have a callback that can automatically log model topology, losses, metrics, weights, gradients, sample predictions, and the best trained model.

import wandb
from wandb.fastai import WandbCallback

wandb.init()

learn = cnn_learner(data, model, callback_fns=WandbCallback)
learn.fit(epochs)

The data to log is configurable through the callback constructor:

from functools import partial

learn = cnn_learner(
    data, model, callback_fns=partial(WandbCallback, input_type="images")
)

It is also possible to use WandbCallback only when starting training. In this case, it must be instantiated:

learn.fit(epochs, callbacks=WandbCallback(learn))

Custom parameters can also be given at that stage.

learn.fit(epochs, callbacks=WandbCallback(learn, input_type="images"))


Options

The WandbCallback() class supports a number of options:

Keyword argument   Default     Description
learn              N/A         The fast.ai learner to hook.
save_model         True        Save the model if it improved at each step. Also loads the best model at the end of training.
mode               auto        min, max, or auto: how to compare the training metric specified in monitor between steps.
monitor            None        Training metric used to measure performance for saving the best model. None defaults to validation loss.
log                gradients   gradients, parameters, all, or None. Losses and metrics are always logged.
input_type         None        images or None. Used to display sample predictions.
validation_data    None        Data used for sample predictions if input_type is set.
predictions        36          Number of predictions to make if input_type is set and validation_data is None.
seed               12345       Initializes the random generator for sample predictions if input_type is set and validation_data is None.
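
As a worked sketch combining several of these options (assuming a fastai v1 environment; the MNIST_SAMPLE data and project name are just for illustration):

from functools import partial

import wandb
from fastai.vision import *  # fastai v1 API
from wandb.fastai import WandbCallback

wandb.init(project="my_project")

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)

# Track accuracy (rather than the default validation loss) to decide which
# model is best, and log sample image predictions.
learn = cnn_learner(
    data,
    models.resnet18,
    metrics=accuracy,
    callback_fns=partial(
        WandbCallback, input_type="images", monitor="accuracy", mode="max"
    ),
)
learn.fit(1)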