PyTorch

Usage Examples

Try our integration out in a Colab notebook (with a video walkthrough below), or see our example repo for scripts, including one on hyperparameter optimization using Hyperband on Fashion MNIST, plus the W&B Dashboard it generates.

Follow along with a video tutorial!

Using wandb.watch

W&B provides first-class support for PyTorch. To automatically log gradients and store the network topology, call wandb.watch and pass in your PyTorch model.

import wandb
import torch.nn.functional as F

# model, optimizer, train_loader, and args are assumed to be defined
wandb.init(config=args)

# Magic
wandb.watch(model, log_freq=100)

model.train()
for batch_idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    if batch_idx % args.log_interval == 0:
        wandb.log({"loss": loss})

Gradients, metrics and the graph won't be logged until wandb.log is called after a forward and backward pass.

Options

By default, the hook logs only gradients.

Arguments

log

  • all: log histograms of both gradients and parameters

  • gradients: log histograms of gradients (default)

  • parameters: log histograms of parameters

  • None: log neither gradients nor parameters

log_freq

  • integer (default 1000): the number of steps between logging gradients/parameters
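
As a quick sketch of how these arguments combine (the specific values here are illustrative, and the three calls are shown as alternatives):

# Log histograms of both gradients and parameters every 50 steps
wandb.watch(model, log="all", log_freq=50)

# Log parameter histograms only, at the default frequency
wandb.watch(model, log="parameters")

# Disable histogram logging entirely
wandb.watch(model, log=None)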

Images

You can pass PyTorch tensors containing image data to wandb.Image, and torchvision utilities will be used to convert them to images automatically.

To log images and view them in the Media panel, you can use the following syntax:

wandb.log({"examples" : [wandb.Image(i) for i in images]})

Multiple Models

If you need to track multiple models in the same script, you can call wandb.watch on each model separately.
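
A minimal sketch, assuming two models named generator and discriminator (the names and settings are illustrative):

# Each call registers hooks on its model, so both are tracked
wandb.watch(generator, log="all", log_freq=100)
wandb.watch(discriminator, log="all", log_freq=100)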