
Track experiments

Try in a Colab Notebook here →

Rapid experimentation is fundamental to machine learning. In this tutorial, we use W&B to track and visualize experiments so that we can quickly iterate and understand our results.

🤩 A shared dashboard for your experiments

With just a few lines of code, you'll get rich, interactive, shareable dashboards, which you can see for yourself here.

🔒 Data & Privacy

We take security very seriously, and our cloud-hosted dashboard uses industry-standard best practices for encryption. If you're working with datasets that cannot leave your enterprise cluster, we offer on-prem installations.

It's also easy to download all your data and export it to other tools, for example for custom analysis in a Jupyter notebook. Here's more on our API.
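
For example, here is a minimal sketch of pulling a run's logged metrics into a pandas DataFrame with the public API; the entity/project/run path below is a placeholder you'd replace with your own:

import wandb

# 🐝 The public API gives read access to everything a run logged
api = wandb.Api()

# Placeholder path: replace with your own "<entity>/<project>/<run_id>"
run = api.run("my-team/basic-intro/abc123")

# run.history() returns the logged metrics as a pandas DataFrame
df = run.history()
print(df.describe())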


🪄 Install the wandb library and log in

Start by installing the library and logging in to your free account.

!pip install wandb -qU
# Log in to your W&B account
import wandb
wandb.login()
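
If you're running non-interactively (for example in CI), you can authenticate without a prompt by setting the WANDB_API_KEY environment variable before calling wandb.login(). A minimal sketch, with a placeholder key:

import os
import wandb

# Placeholder: in practice, read the key from a secrets manager; never hardcode it
os.environ["WANDB_API_KEY"] = "your-api-key"
wandb.login()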

👟 Run an experiment

1️⃣. Start a new run and pass in hyperparameters to track

2️⃣. Log metrics from training or evaluation

3️⃣. Visualize results in the dashboard

import random

# Launch 5 simulated experiments
total_runs = 5
for run in range(total_runs):
    # 🐝 1️⃣ Start a new run to track this script
    wandb.init(
        # Set the project where this run will be logged
        project="basic-intro",
        # We pass a run name (otherwise it'll be randomly assigned, like sunshine-lollypop-10)
        name=f"experiment_{run}",
        # Track hyperparameters and run metadata
        config={
            "learning_rate": 0.02,
            "architecture": "CNN",
            "dataset": "CIFAR-100",
            "epochs": 10,
        })

    # This simple block simulates a training loop logging metrics
    epochs = 10
    offset = random.random() / 5
    for epoch in range(2, epochs):
        acc = 1 - 2 ** -epoch - random.random() / epoch - offset
        loss = 2 ** -epoch + random.random() / epoch + offset

        # 🐝 2️⃣ Log metrics from your script to W&B
        wandb.log({"acc": acc, "loss": loss})

    # Mark the run as finished
    wandb.finish()

3️⃣ When you run this code, you can find your interactive dashboard by clicking any of the 👆 wandb links above.
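
If you'd rather capture that dashboard link programmatically, the run object returned by wandb.init exposes its URL. A minimal sketch, reusing the project from the example above (the run name here is hypothetical):

run = wandb.init(project="basic-intro", name="experiment_url_demo")
print(run.url)  # direct link to this run's dashboard page
wandb.finish()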

🔥 Simple PyTorch Neural Network

💪 Run this model to train a simple MNIST classifier, and click on the project page link to see your results stream in live to a W&B project.

Any wandb run automatically logs metrics, system information, hyperparameters, and terminal output, and you'll see an interactive table with model inputs and outputs.

Set up the DataLoader

To run this example, we'll need to install PyTorch. If you're using Google Colab, it is already preinstalled.

!pip install torch torchvision
import wandb
import math
import random
import torch, torchvision
import torch.nn as nn
import torchvision.transforms as T

device = "cuda:0" if torch.cuda.is_available() else "cpu"

def get_dataloader(is_train, batch_size, slice=5):
    "Get a dataloader over a slice of the MNIST dataset"
    full_dataset = torchvision.datasets.MNIST(root=".", train=is_train, transform=T.ToTensor(), download=True)
    sub_dataset = torch.utils.data.Subset(full_dataset, indices=range(0, len(full_dataset), slice))
    loader = torch.utils.data.DataLoader(dataset=sub_dataset,
                                         batch_size=batch_size,
                                         shuffle=is_train,
                                         pin_memory=True, num_workers=2)
    return loader

def get_model(dropout):
    "A simple model"
    model = nn.Sequential(nn.Flatten(),
                          nn.Linear(28*28, 256),
                          nn.BatchNorm1d(256),
                          nn.ReLU(),
                          nn.Dropout(dropout),
                          nn.Linear(256, 10)).to(device)
    return model

def validate_model(model, valid_dl, loss_func, log_images=False, batch_idx=0):
    "Compute performance of the model on the validation dataset and log a wandb.Table"
    model.eval()
    val_loss = 0.
    with torch.inference_mode():
        correct = 0
        for i, (images, labels) in enumerate(valid_dl):
            images, labels = images.to(device), labels.to(device)

            # Forward pass ➡
            outputs = model(images)
            val_loss += loss_func(outputs, labels) * labels.size(0)

            # Compute accuracy and accumulate
            _, predicted = torch.max(outputs.data, 1)
            correct += (predicted == labels).sum().item()

            # Log one batch of images to the dashboard, always same batch_idx.
            if i == batch_idx and log_images:
                log_image_table(images, predicted, labels, outputs.softmax(dim=1))
    return val_loss / len(valid_dl.dataset), correct / len(valid_dl.dataset)

def log_image_table(images, predicted, labels, probs):
    "Log a wandb.Table with (img, pred, target, scores)"
    # 🐝 Create a wandb Table to log images, labels and predictions to
    table = wandb.Table(columns=["image", "pred", "target"] + [f"score_{i}" for i in range(10)])
    for img, pred, targ, prob in zip(images.to("cpu"), predicted.to("cpu"), labels.to("cpu"), probs.to("cpu")):
        table.add_data(wandb.Image(img[0].numpy()*255), pred, targ, *prob.numpy())
    wandb.log({"predictions_table": table}, commit=False)

Train Your Model

# Launch 5 experiments, trying different dropout rates
for _ in range(5):
    # 🐝 initialise a wandb run
    wandb.init(
        project="pytorch-intro",
        config={
            "epochs": 10,
            "batch_size": 128,
            "lr": 1e-3,
            "dropout": random.uniform(0.01, 0.80),
        })

    # Copy your config
    config = wandb.config

    # Get the data
    train_dl = get_dataloader(is_train=True, batch_size=config.batch_size)
    valid_dl = get_dataloader(is_train=False, batch_size=2*config.batch_size)
    n_steps_per_epoch = math.ceil(len(train_dl.dataset) / config.batch_size)

    # A simple MLP model
    model = get_model(config.dropout)

    # Make the loss and optimizer
    loss_func = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=config.lr)

    # Training
    example_ct = 0
    step_ct = 0
    for epoch in range(config.epochs):
        model.train()
        for step, (images, labels) in enumerate(train_dl):
            images, labels = images.to(device), labels.to(device)

            outputs = model(images)
            train_loss = loss_func(outputs, labels)
            optimizer.zero_grad()
            train_loss.backward()
            optimizer.step()

            example_ct += len(images)
            metrics = {"train/train_loss": train_loss,
                       "train/epoch": (step + 1 + (n_steps_per_epoch * epoch)) / n_steps_per_epoch,
                       "train/example_ct": example_ct}

            if step + 1 < n_steps_per_epoch:
                # 🐝 Log train metrics to wandb
                wandb.log(metrics)

            step_ct += 1

        val_loss, accuracy = validate_model(model, valid_dl, loss_func, log_images=(epoch==(config.epochs-1)))

        # 🐝 Log train and validation metrics to wandb
        val_metrics = {"val/val_loss": val_loss,
                       "val/val_accuracy": accuracy}
        wandb.log({**metrics, **val_metrics})

        print(f"Train Loss: {train_loss:.3f}, Valid Loss: {val_loss:.3f}, Accuracy: {accuracy:.2f}")

    # If you had a test set, this is how you could log it as a Summary metric
    wandb.summary['test_accuracy'] = 0.8

    # 🐝 Close your wandb run
    wandb.finish()

You have now trained your first model using wandb! 👆 Click on the wandb link above to see your metrics.

🔔 Try W&B Alerts

W&B Alerts lets you send alerts, triggered from your Python code, to Slack or email. The first time you'd like to send an alert from your code, there are two steps to follow:

1) Turn on Alerts in your W&B User Settings

2) Add wandb.alert() to your code:

wandb.alert(
    title="Low accuracy",
    text="Accuracy is below the acceptable threshold"
)

See the minimal example below for how to use wandb.alert. You can find the full docs for W&B Alerts here.

# Start a wandb run
wandb.init(project="pytorch-intro")

# Simulating a model training loop
acc_threshold = 0.3
for training_step in range(1000):

    # Generate a random number for accuracy
    accuracy = round(random.random() + random.random(), 3)
    print(f'Accuracy is: {accuracy}, {acc_threshold}')

    # 🐝 Log accuracy to wandb
    wandb.log({"Accuracy": accuracy})

    # 🔔 If the accuracy is below the threshold, fire a W&B Alert and stop the run
    if accuracy <= acc_threshold:
        # 🐝 Send the wandb Alert
        wandb.alert(
            title='Low Accuracy',
            text=f'Accuracy {accuracy} at step {training_step} is below the acceptable threshold, {acc_threshold}',
        )
        print('Alert triggered')
        break

# Mark the run as finished (useful in Jupyter notebooks)
wandb.finish()
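
wandb.alert also accepts a severity level and a wait_duration that suppresses repeated alerts with the same title. A minimal sketch of those options (check the W&B Alerts docs linked above for the current signature):

from datetime import timedelta
import wandb

wandb.alert(
    title="Low Accuracy",
    text="Accuracy dropped below the acceptable threshold",
    level=wandb.AlertLevel.WARN,         # one of INFO, WARN, ERROR
    wait_duration=timedelta(minutes=5),  # wait before re-sending an alert with this title
)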

What's next?

In the next tutorial, you will learn how to view and analyze model predictions using W&B Tables:

👉 View & Analyze Model Predictions
