MosaicML Composer

Try in a Colab Notebook here →

Composer is a library for training neural networks better, faster, and cheaper. It contains many state-of-the-art methods for accelerating neural network training and improving generalization, along with an optional Trainer API that makes composing many different enhancements easy.

W&B provides a lightweight wrapper for logging your ML experiments. But you don't need to combine the two yourself: Weights & Biases is incorporated directly into the Composer library via the WandBLogger.

Start logging to W&B with two lines of code

from composer import Trainer
from composer.loggers import WandBLogger

wandb_logger = WandBLogger(init_params=init_params)
trainer = Trainer(..., logger=wandb_logger)

Interactive dashboards accessible anywhere, and more!

Using Composer's WandBLogger

The Composer library has a WandBLogger class that can be used along with the Trainer to log metrics to Weights & Biases. It is as simple as instantiating the logger and passing it to the Trainer:

wandb_logger = WandBLogger()
trainer = Trainer(logger=wandb_logger)

Logger arguments

Below are some of the most commonly used parameters of WandBLogger; see the Composer documentation for a full list and descriptions.

init_params: Parameters to pass to wandb.init, such as your wandb project, entity, name, or config. See here for the full list of arguments wandb.init accepts.
log_artifacts: Whether to log checkpoints to wandb.
log_artifacts_every_n_batches: Interval at which to upload Artifacts. Only applicable when log_artifacts=True.
rank_zero_only: Whether to log only on the rank-zero process. When logging artifacts to wandb, it is highly recommended to log on all ranks: Artifacts from ranks ≥ 1 will not be stored otherwise, which may discard pertinent information. For example, when using DeepSpeed ZeRO, it would be impossible to restore from checkpoints without artifacts from all ranks (default: False).

A typical usage would be:

init_params = {"project": "composer"}

wandb_logger = WandBLogger(log_artifacts=True, init_params=init_params)
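To see why the rank_zero_only default can bite with sharded training, recall that under DeepSpeed ZeRO each rank holds its own checkpoint shard, so dropping ranks ≥ 1 makes the checkpoint unrecoverable. The sketch below is plain Python for illustration only, not Composer's implementation; the shard names are made up:

```python
# Illustration only: each rank in a hypothetical 4-GPU run
# produces its own checkpoint shard.
shards = {rank: f"ckpt-rank{rank}.pt" for rank in range(4)}

def kept_artifacts(shards, rank_zero_only):
    """Mimic which artifacts survive under each rank_zero_only setting."""
    if rank_zero_only:
        return {0: shards[0]}  # ranks 1-3 are silently dropped
    return dict(shards)

print(len(kept_artifacts(shards, rank_zero_only=True)))   # only 1 shard kept
print(len(kept_artifacts(shards, rank_zero_only=False)))  # all 4 shards kept
```

This is why the documentation above recommends rank_zero_only=False whenever log_artifacts=True on a multi-rank run.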

Log prediction samples

You can use Composer's Callbacks system to control when you log to Weights & Biases via the WandBLogger. In this example, we log a sample of our validation images and predictions:

import wandb
from composer import Callback, State, Logger

class LogPredictions(Callback):
    def __init__(self, num_samples=100, seed=1234):
        super().__init__()
        self.num_samples = num_samples
        self.data = []

    def eval_batch_end(self, state: State, logger: Logger):
        """Compute predictions per batch and store them on self.data"""
        if state.timer.epoch == state.max_duration:  # on last val epoch
            if len(self.data) < self.num_samples:
                n = self.num_samples
                x, y = state.batch_pair
                outputs = state.outputs.argmax(-1)
                data = [[wandb.Image(x_i), y_i, y_pred]
                        for x_i, y_i, y_pred in zip(x[:n], y[:n], outputs[:n])]
                self.data += data

    def eval_end(self, state: State, logger: Logger):
        """Create a wandb.Table and log it"""
        columns = ['image', 'ground truth', 'prediction']
        table = wandb.Table(columns=columns, data=self.data[:self.num_samples])
        wandb.log({'sample_table': table}, step=int(state.timer.batch))

trainer = Trainer(..., callbacks=[LogPredictions()])
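The callback's sampling logic simply accumulates prediction rows across eval batches until num_samples is reached, then truncates in eval_end. Stripped of the wandb and Composer objects, the accumulation pattern behaves like this stand-alone sketch (dummy integer "rows" in place of images and labels):

```python
num_samples = 4
data = []

# each "batch" yields a list of prediction rows
for batch_rows in [[1, 2], [3, 4], [5, 6]]:
    if len(data) < num_samples:  # stop collecting once we have enough
        data += batch_rows

rows = data[:num_samples]  # final truncation, as in eval_end
print(rows)  # [1, 2, 3, 4]
```

Note that a whole batch is appended once the length check passes, so data can briefly overshoot num_samples; the slice in eval_end is what enforces the exact limit.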