Logger that sends system metrics to W&B.
WandbMetricsLogger(
    log_freq: Union[LogStrategy, int] = "epoch",
    initial_global_step: int = 0,
) -> None
WandbMetricsLogger automatically logs to wandb the logs dictionary that callback methods receive as an argument.
This callback automatically logs the following to a W&B run page:
- system (CPU/GPU/TPU) metrics,
- train and validation metrics defined in model.compile,
- learning rate (for both a fixed value and a learning rate scheduler)
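A minimal usage sketch, assuming wandb and Keras are installed and you are logged in to W&B; the project name, model, and data below are illustrative placeholders, not part of the API:

```python
import numpy as np
import wandb
from tensorflow import keras
from wandb.integration.keras import WandbMetricsLogger

wandb.init(project="metrics-logger-demo")  # hypothetical project name

# Tiny synthetic regression problem as placeholder data.
x = np.random.rand(128, 4)
y = x.sum(axis=1)

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# With log_freq="epoch", the loss and mae defined above are logged
# to the W&B run once per epoch, alongside system metrics.
model.fit(
    x,
    y,
    epochs=2,
    batch_size=32,
    callbacks=[WandbMetricsLogger(log_freq="epoch")],
)
```

Passing log_freq="batch" or an integer instead trades run-page granularity against logging overhead.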
If you resume training by passing initial_epoch to model.fit and you are using a
learning rate scheduler, make sure to pass initial_global_step to
WandbMetricsLogger. The initial_global_step is step_size * initial_step, where
step_size is the number of training steps per epoch. step_size can be calculated as
the product of the cardinality of the training dataset and the batch size.
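The computation above can be sketched as a small helper; the function and argument names are illustrative, not part of the wandb API:

```python
def compute_initial_global_step(
    dataset_cardinality: int,
    batch_size: int,
    initial_step: int,
) -> int:
    """Value to pass as initial_global_step when resuming training.

    Follows the note above: step_size (training steps per epoch) is the
    product of the training dataset's cardinality and the batch size.
    """
    step_size = dataset_cardinality * batch_size
    return step_size * initial_step


# Example: resuming at step 5 with a dataset cardinality of 100
# and a batch size of 32.
print(compute_initial_global_step(100, 32, 5))  # -> 16000
```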
Arguments:
    log_freq ("epoch", "batch", or int): If "epoch", logs metrics at the end of each epoch. If "batch", logs metrics at the end of each batch. If an integer, logs metrics after that many batches. Defaults to "epoch".
    initial_global_step (int): Use this argument to correctly log the learning rate when you resume training from some initial_epoch and a learning rate scheduler is used. Defaults to 0.