Track and visualize experiments in real time, compare baselines, and iterate quickly on ML projects
`wandb.log()`: Log metrics over time in a training loop, such as accuracy and loss. By default, when you call `wandb.log` it appends a new step to the `history` object and updates the `summary` values.
`history`: An array of dictionary-like objects that tracks metrics over time. These time series values are shown as default line plots in the UI.
`summary`: By default, the final value of a metric logged with `wandb.log()`. You can set the summary for a metric manually to capture the highest accuracy or lowest loss instead of the final value. These values are used in the table and in plots that compare runs; for example, you could compare the final accuracy across all runs in your project.
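The relationship between `history` and `summary` can be sketched in plain Python. This is only an illustration of the bookkeeping described above, not the actual `wandb` implementation:

```python
# Plain-Python sketch of how wandb tracks history vs. summary.
# This mimics the bookkeeping only; it does not use the wandb library.

history = []   # one dict appended per log() call (a "step")
summary = {}   # by default, the last logged value of each metric

def log(metrics):
    """Append a step to history and update summary, like wandb.log."""
    history.append(dict(metrics))
    summary.update(metrics)  # default: summary holds the final value

for epoch in range(3):
    log({"epoch": epoch, "loss": 1.0 / (epoch + 1)})

# Override the default: keep the best (lowest) loss instead of the final one.
summary["best_loss"] = min(step["loss"] for step in history)

print(len(history))          # 3 steps recorded
print(summary["loss"])       # final logged loss
print(summary["best_loss"])  # lowest loss seen across all steps
```

Here `history` grows by one entry per step, while `summary` keeps a single value per metric, which is what makes it suitable for the runs table and cross-run comparison plots.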
The `wandb` library is incredibly flexible. Here are some suggested guidelines:
1. Config: Track hyperparameters, architecture, dataset, and anything else you'd like to use to reproduce your model. These will show up in columns; use config columns to group, sort, and filter runs dynamically in the app.
2. Project: A project is a set of experiments you can compare together. Each project gets a dedicated dashboard page, and you can easily turn different groups of runs on and off to compare different model versions.
3. Notes: A quick commit message to yourself; the note can be set from your script and is editable in the table.
4. Tags: Identify baseline runs and favorite runs. You can filter runs using tags, and they're editable in the table.
```python
config = dict(
    learning_rate=0.01,
    momentum=0.2,
    architecture="CNN",
    dataset_id="peds-0192",
    infra="AWS",
)
```
- Git commit: Pick up the latest git commit and see it on the overview tab of the run page, as well as a `diff.patch` file if there are any uncommitted changes.
- Dependencies: The `requirements.txt` file will be uploaded and shown on the files tab of the run page, along with any files you save to the `wandb` directory for the run.
Where data and model metrics are concerned, you get to decide exactly what you want to log.
- Dataset: You must explicitly log images or other dataset samples for them to stream to W&B.
- PyTorch gradients: Add `wandb.watch(model)` to see gradients of the weights as histograms in the UI.
- Configuration info: Log hyperparameters, a link to your dataset, or the name of the architecture you're using as config parameters, passed in like the `config` dictionary shown above.
- Metrics: Use `wandb.log` to see metrics from your model. If you log metrics like accuracy and loss from inside your training loop, you'll get live-updating graphs in the UI.