Decide how users install W&B
Before you start, decide whether W&B should be a required dependency or an optional feature of your library.
Require W&B as a dependency
If W&B is central to your library’s functionality, add the W&B Python SDK (wandb) to your dependencies:
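For example, with a pyproject.toml-based build (the package name and version pin below are illustrative, not prescribed by W&B):

```toml
[project]
name = "my-library"      # placeholder package name
version = "0.1.0"
dependencies = [
    "wandb>=0.17",       # illustrative version pin
]
```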
Make W&B optional on installation
If W&B is an optional feature, allow your library to run without it installed. You can either import wandb conditionally in Python or declare it as an optional dependency in pyproject.toml.
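The optional-dependency route in pyproject.toml can be sketched like this (the package name and extra name are illustrative):

```toml
[project]
name = "my-library"      # placeholder package name
version = "0.1.0"

[project.optional-dependencies]
# Users opt in with: pip install my-library[wandb]
wandb = ["wandb>=0.17"]
```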
Detect whether wandb is available and raise a clear error if a user enables W&B features without installing it.
Authenticate users
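A minimal sketch of the conditional import; the helper name `enable_tracking` is hypothetical:

```python
# Import wandb lazily so the library still works when it is not installed.
try:
    import wandb
except ImportError:
    wandb = None


def enable_tracking(project: str):
    """Hypothetical helper: start a W&B run, or fail with a clear message."""
    if wandb is None:
        raise ImportError(
            "This feature requires the wandb package. "
            "Install it with `pip install wandb`."
        )
    return wandb.init(project=project)
```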
W&B uses API keys to authenticate users and machines.
Create an API key
An API key authenticates a client or machine to W&B. You can generate an API key from your user profile:
- Click your user profile icon in the upper right corner.
- Select User Settings, then scroll to the API Keys section.
For a more streamlined approach, create an API key by going directly to User Settings. Copy the newly created API key immediately and save it in a secure location such as a password manager.
Install and log in to W&B
To install the wandb library locally and log in:
- Set the WANDB_API_KEY environment variable to your API key.
- Install the wandb library and log in.
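From the command line, these steps can be sketched as (the key value is a placeholder you must replace):

```shell
# Replace <your-api-key> with the key from your W&B user settings.
export WANDB_API_KEY=<your-api-key>

pip install wandb
wandb login
```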
Start a run
A run represents a single unit of computation, such as a training experiment. Most libraries create one run per training job. For more information about runs, see W&B Runs. Initialize a run with wandb.init() and specify a name for your project and your team entity (team name). If you do not specify a project, W&B stores your run in a default project called “uncategorized”.
Call run.finish() to close the run and log all the data to W&B.
Set wandb as an optional dependency
If you want to make wandb optional for your users, choose one of the following approaches:
- Define a wandb flag that users of your library can set.
- Or, set wandb to disabled in wandb.init.
- Or, set wandb to offline mode, for example with the WANDB_MODE environment variable. Note that wandb still runs in this mode; it just does not communicate back to W&B over the internet.
Define a run config
Provide a configuration dictionary when you initialize your run to log hyperparameters and other metadata to W&B. Use the W&B App to compare runs based on their config parameters and to filter them in the Runs table. You can also use these parameters to group runs together in the W&B App. For example, if the batch size (batch_size) is defined as a config parameter, it appears as a column in the Runs table, which lets users filter and compare runs based on their batch size. Useful values to store in the config include:
- Model name, version, architecture parameters, and hyperparameters.
- Dataset name, version, number of training or validation examples.
- Training parameters such as learning rate, batch size, and optimizer.
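A sketch with illustrative values; mode="disabled" is used only so the snippet runs standalone:

```python
import wandb

# All values below are illustrative placeholders.
config = {
    "model_name": "resnet50",
    "dataset": "imagenet-subset",
    "num_train_examples": 50_000,
    "learning_rate": 1e-3,
    "batch_size": 32,
    "optimizer": "adam",
}

run = wandb.init(project="my-library-demo", config=config, mode="disabled")
run.finish()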
Update the run config
If values are not available at initialization time, update the config later with wandb.Run.config.update. For example, you might want to add a model’s parameters after the model is instantiated.
Log metrics and data
Log metrics
Create a dictionary whose keys are the names of your metrics. Pass this dictionary to wandb.Run.log() to log it to W&B.
A common convention is to prefix metric names with train/ and val/ for training and validation metrics, respectively, but you can use any prefix that makes sense for your use case.
This creates separate sections in your project’s workspace for your training and validation metrics, or for any other metric types you’d like to separate.
See wandb.Run.log() for more details.
Control the x-axis
If you make multiple calls to wandb.Run.log() for the same training step, the wandb SDK increments an internal step counter with each call. This counter may not align with the training step in your training loop.
To avoid this situation, define your x-axis step explicitly with run.define_metric, one time, immediately after you call wandb.init. The wildcard pattern, *, means that every metric will use global_step as the x-axis in your charts. If you only want certain metrics to be logged against global_step, you can specify them instead. Then log your metrics together with your global_step each time you call wandb.Run.log().
Log media and structured data
In addition to scalars, you can log images, tables, text, audio, video, and more. Some considerations when logging data include:
- How often should the metric be logged? Should it be optional?
- What type of data could be helpful in visualizing?
- For images, you can log sample predictions, segmentation masks, etc., to see the evolution over time.
- For text, you can log tables of sample predictions for later exploration.
Support distributed training
For frameworks supporting distributed environments, you can adapt any of the following workflows:
- Log only from the main process (recommended).
- Log from every process and group runs using a shared group name.
Track models and datasets with artifacts
Use W&B Artifacts to track and version models and datasets. Artifacts provide storage and versioning for machine learning assets, and they automatically track lineage to show how data and models are related.
When integrating artifacts, decide on the following:
- Whether to log model checkpoints or datasets as artifacts (in case you want to make it optional).
- Artifact input references (for example, entity/project/artifact).
- Logging frequency of model checkpoints or datasets. For example, every epoch, every 500 steps, and so on.
Log model checkpoints
Log model checkpoints to W&B. A common approach is to log checkpoints as artifacts, using the unique run ID generated by W&B as part of the artifact name.
Log input artifacts
Log datasets or pretrained models used as inputs with run.use_artifact(), which allows W&B to track the lineage of the dataset used in the run.
Download artifacts
Download previously logged artifacts from W&B to use in your training or inference code. If you have a run context, use wandb.Run.use_artifact() to reference an artifact in W&B and then call wandb.Artifact.download() to download it to a local directory.