

Try in a Colab Notebook here →

MMDetection is an open-source object detection toolbox based on PyTorch and is part of the OpenMMLab project. It offers a composable and modular API design that you can use to easily build custom object detection and segmentation pipelines.

Weights & Biases is directly integrated into MMDetection through the dedicated MMDetWandbHook, which can be used to:

✅ Log training and evaluation metrics.

✅ Log versioned model checkpoints.

✅ Log versioned validation dataset with ground truth bounding boxes.

✅ Log and visualize model predictions.

🔥 Getting Started

Sign up and Log in to wandb

a) Sign up for a free account

b) Pip install the wandb library

c) To log in from your training script, you'll need to be signed in to your account; you'll then find your API key on the Authorize page.

If you are using Weights & Biases for the first time, you might want to check out our quickstart.

pip install wandb
wandb login

Using MMDetWandbHook

You can get started with Weights & Biases by adding MMDetWandbHook to the log_config section of the MMDetection config system.


MMDetWandbHook is supported by MMDetection v2.25.0 and above.

import wandb
from mmcv import Config

config_file = "mmdetection/configs/path/to/"
cfg = Config.fromfile(config_file)

cfg.log_config.hooks = [
    dict(type="MMDetWandbHook",
         init_kwargs={"project": "mmdetection"},
         interval=50,
         log_checkpoint=False,
         log_checkpoint_metadata=True,
         num_eval_images=100,
         bbox_score_thr=0.3)]

MMDetWandbHook accepts the following arguments:

  • init_kwargs (dict): A dict passed to wandb.init to initialize a W&B run.
  • interval (int): Logging interval (every k iterations). Defaults to 50.
  • log_checkpoint (bool): Save the checkpoint at every checkpoint interval as a W&B Artifact. Use this for model versioning, where each version is a checkpoint. Defaults to False.
  • log_checkpoint_metadata (bool): Log the evaluation metrics computed on the validation data, along with the current epoch, as metadata for that checkpoint. Defaults to True.
  • num_eval_images (int): The number of validation images to be logged. If zero, the evaluation won't be logged. Defaults to 100.
  • bbox_score_thr (float): Threshold for bounding box scores. Defaults to 0.3.

📈 Log Metrics

Start tracking training and evaluation metrics by using the init_kwargs argument of MMDetWandbHook. This argument takes a dictionary of key-value pairs, which is in turn passed to wandb.init and controls which project your run is logged to, along with other features of the run.

init_kwargs = {
    'project': 'mmdetection',
    'entity': 'my_team_name',
    'config': {'lr': 1e-4, 'batch_size': 32},
    'tags': ['resnet50', 'sgd']
}

Check out all the arguments for wandb.init here.

🏁 Checkpointing

You can reliably store model checkpoints as W&B Artifacts by passing log_checkpoint=True to MMDetWandbHook. This feature depends on MMCV's CheckpointHook, which periodically saves the model checkpoints; the save period is determined by checkpoint_config.interval.
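For example, to save and version a checkpoint every epoch, only two settings matter. A minimal sketch, assuming plain dicts stand in for the corresponding config entries:

```python
# Sketch: the save period comes from MMCV's CheckpointHook via
# checkpoint_config.interval; MMDetWandbHook only decides whether each
# saved checkpoint is also uploaded as a W&B Artifact.
checkpoint_config = dict(interval=1)  # save a checkpoint every epoch

wandb_hook = dict(
    type="MMDetWandbHook",
    init_kwargs={"project": "mmdetection"},
    log_checkpoint=True,  # upload each saved checkpoint as an Artifact version
)

print(checkpoint_config["interval"], wandb_hook["log_checkpoint"])
```

In a real config, these would be assigned to cfg.checkpoint_config and appended to cfg.log_config.hooks.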


Every W&B account comes with 100 GB of free storage for datasets and models.

The checkpoints are shown as different versions in the left-hand pane. You can download a model from the Files tab or use the API to download it programmatically.

📣 Checkpoint with Metadata

If log_checkpoint_metadata is True, every checkpoint version will have metadata associated with it. This feature depends on CheckpointHook as well as EvalHook or DistEvalHook. The metadata is logged only when the checkpoint interval is divisible by the evaluation interval.
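The divisibility rule can be illustrated with a small helper (hypothetical, not part of MMDetection):

```python
def metadata_is_logged(checkpoint_interval: int, eval_interval: int) -> bool:
    """Hypothetical helper: checkpoint metadata is attached only when the
    checkpoint interval is divisible by the evaluation interval."""
    return checkpoint_interval % eval_interval == 0

# Checkpoint every 4 epochs, evaluate every 2 epochs -> metadata is logged.
print(metadata_is_logged(4, 2))  # True
# Checkpoint every 4 epochs, evaluate every 3 epochs -> metadata is skipped.
print(metadata_is_logged(4, 3))  # False
```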

The logged metadata is displayed under the Metadata tab.

Visualize Dataset and Model Prediction

The ability to interactively visualize the dataset and especially the model prediction can help build and debug better models. Using MMDetWandbHook you can now log the validation data as W&B Tables and create versioned W&B Tables for model prediction.

The num_eval_images argument controls the number of validation samples that are logged as W&B Tables. Here are a few things to note:

  • If num_eval_images=0, neither the validation data nor the model predictions will be logged.
  • If validate=False is passed to the mmdet.apis.train_detector API, the validation data and model predictions will not be logged.
  • If the num_eval_images is greater than the total number of validation samples, the complete validation dataset is logged.
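The rules above can be sketched as a small helper (hypothetical, for illustration only):

```python
def images_logged(num_eval_images: int, val_size: int, validate: bool = True) -> int:
    """Hypothetical helper, not part of MMDetection: how many validation
    images end up in the logged W&B Table under the rules above."""
    if not validate or num_eval_images == 0:
        return 0  # logging is disabled entirely
    return min(num_eval_images, val_size)  # capped at the dataset size

print(images_logged(100, 5000))                  # 100
print(images_logged(100, 64))                    # 64 (entire validation set)
print(images_logged(100, 5000, validate=False))  # 0
```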

The val_data table is uploaded only once. The prediction table (run_<id>_pred) and subsequent runs reference the uploaded data to save memory. A new version of val_data is created only when it changes.

Next Steps

If you want to train an instance segmentation model (Mask R-CNN) on a custom dataset, you can check out our How to Use Weights & Biases with MMDetection W&B Report on Fully Connected.

Any questions or issues about this Weights & Biases integration? Open an issue in the MMDetection GitHub repository and we'll catch it and get you an answer :)
