MosaicML Composer
State-of-the-art algorithms to train your neural networks
Composer is a library for training neural networks better, faster, and cheaper. It contains many state-of-the-art methods for accelerating neural network training and improving generalization, along with an optional Trainer API that makes composing many different enhancements easy.
W&B provides a lightweight wrapper for logging your ML experiments. But you don't need to combine the two yourself: Weights & Biases is incorporated directly into the Composer library via the WandBLogger.

Start logging to W&B with two lines of code

```python
from composer import Trainer
from composer.loggers import WandBLogger

wandb_logger = WandBLogger(init_params=init_params)
trainer = Trainer(..., loggers=wandb_logger)
```
Interactive dashboards accessible anywhere, and more!

Using Composer's WandBLogger

The Composer library's WandBLogger class can be used with the Trainer to log metrics to Weights & Biases. It is as simple as instantiating the logger and passing it to the Trainer:
```python
wandb_logger = WandBLogger()
trainer = Trainer(loggers=wandb_logger)
```
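For a fuller picture, here is a minimal, self-contained sketch of a training run with the WandBLogger attached. The model, dataset, data path, and project name are illustrative placeholders rather than part of the integration itself, and it assumes torchvision is installed:
```python
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from composer import Trainer
from composer.loggers import WandBLogger
from composer.models import ComposerClassifier

# Illustrative model and dataset; any torch.nn.Module and DataLoader work.
module = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
dataset = datasets.MNIST("data/", train=True, download=True,
                         transform=transforms.ToTensor())

# Metrics logged during trainer.fit() stream to the given W&B project.
trainer = Trainer(
    model=ComposerClassifier(module),
    train_dataloader=DataLoader(dataset, batch_size=128),
    max_duration="1ep",
    loggers=WandBLogger(init_params={"project": "composer"}),
)
trainer.fit()
```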

Logger arguments

Below are some of the most commonly used parameters of WandBLogger; see the Composer documentation for the full list and descriptions.
| Parameter | Description |
| --- | --- |
| `init_params` | Parameters to pass to `wandb.init`, such as your wandb `project`, `entity`, `name`, or `config`. See the `wandb.init` reference for the full list of accepted arguments. |
| `log_artifacts` | Whether to log checkpoints to wandb. |
| `log_artifacts_every_n_batches` | Interval at which to upload Artifacts. Only applicable when `log_artifacts=True`. |
| `rank_zero_only` | Whether to log only on the rank-zero process. When logging artifacts to wandb, it is highly recommended to log on all ranks: artifacts from ranks ≥ 1 will not be stored, which may discard pertinent information. For example, when using DeepSpeed ZeRO, it would be impossible to restore from checkpoints without artifacts from all ranks (default: `False`). |
A typical usage would be:
```python
init_params = {
    "project": "composer",
    "name": "imagenette_benchmark",
    "config": {
        "arch": "Resnet50",
        "use_mixed_precision": True,
    },
}

wandb_logger = WandBLogger(log_artifacts=True, init_params=init_params)
```
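As the table above notes, checkpoint Artifacts are only kept for the ranks that log them. A small sketch of a multi-GPU configuration that logs from every rank, so that a DeepSpeed ZeRO run can later be restored from its sharded checkpoints (`rank_zero_only=False` is already the default, but is spelled out here for clarity; the project name is illustrative):
```python
# Log checkpoints as Artifacts from every rank, not just rank zero,
# so sharded DeepSpeed ZeRO checkpoints remain restorable.
wandb_logger = WandBLogger(
    log_artifacts=True,
    rank_zero_only=False,  # keep artifacts from ranks >= 1 as well
    init_params={"project": "composer"},
)
```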

Log prediction samples

You can use Composer's Callbacks system to control when you log to Weights & Biases via the WandBLogger. In this example, we log a sample of our validation images and predictions:
Log Image Predictions
```python
import wandb
from composer import Callback, Logger, State, Trainer
from composer.loggers import WandBLogger

class LogPredictions(Callback):
    def __init__(self, num_samples=100, seed=1234):
        super().__init__()
        self.num_samples = num_samples
        self.data = []

    def eval_batch_end(self, state: State, logger: Logger):
        """Compute predictions per batch and store them on self.data."""
        if state.timer.epoch == state.max_duration:  # on the last val epoch
            if len(self.data) < self.num_samples:
                n = self.num_samples
                x, y = state.batch_pair
                outputs = state.outputs.argmax(-1)
                data = [[wandb.Image(x_i), y_i, y_pred]
                        for x_i, y_i, y_pred in zip(x[:n], y[:n], outputs[:n])]
                self.data += data

    def eval_end(self, state: State, logger: Logger):
        """Create a wandb.Table and log it."""
        columns = ['image', 'ground truth', 'prediction']
        table = wandb.Table(columns=columns, data=self.data[:self.num_samples])
        wandb.log({'sample_table': table}, step=int(state.timer.batch))
    ...

trainer = Trainer(
    ...
    loggers=[WandBLogger()],
    callbacks=[LogPredictions()]
)
```
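Once the run finishes, `sample_table` appears as a W&B Table in the run's workspace, with one row per sample pairing the input image with its ground-truth label and the model's prediction.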