Create and track plots from experiments
Create and track plots from machine learning experiments.
Log a dictionary of metrics, media, or custom objects to a step with the W&B Python SDK. W&B collects the key-value pairs during each step and stores them in one unified dictionary each time you log data with `wandb.Run.log()`. Data logged from your script is saved locally to your machine in a directory called `wandb`, then synced to the W&B cloud or your private server.
Each call to `wandb.Run.log()` is a new *step* by default. W&B uses steps as the default x-axis when it creates charts and panels. You can optionally create and use a custom x-axis or capture a custom summary metric. For more information, see Customize log axes.

Use `wandb.Run.log()` to log consecutive values for each step: 0, 1, 2, and so on. It is not possible to write to a specific history step; W&B only writes to the "current" and "next" step.

W&B automatically logs the following information during a W&B Experiment:
- System metrics, such as CPU and GPU utilization. GPU metrics are captured with `nvidia-smi`.

Turn on Code Saving in your account's Settings page to also log:

- A `diff.patch` file if there are any uncommitted changes.
- A `requirements.txt` file, which is uploaded and shown on the files tab of the run page, along with any files you save to the `wandb` directory for the run.

With W&B, you can decide exactly what you want to log. The following lists some commonly logged objects:
- Use `wandb.plot()` with `wandb.Run.log()` to track charts. See Log Plots for more information.
- Use `wandb.Table` to log data to visualize and query with W&B. See Log Tables for more information.
- Use `wandb.Run.watch(model)` to see gradients of the weights as histograms in the UI.
- Save hyperparameters with `wandb.init(config=your_config_dictionary)`. See the PyTorch Integrations page for more information.
- Use `wandb.Run.log()` to see metrics from your model. If you log metrics like accuracy and loss from inside your training loop, you get live-updating graphs in the UI.

Due to GraphQL limitations, metric names in W&B must follow specific naming rules:
Metric names must match the pattern `^[_a-zA-Z][_a-zA-Z0-9]*$`: letters, digits, and underscores only, and the name cannot start with a digit.
Valid metric names:

```python
with wandb.init() as run:
    run.log({"accuracy": 0.9, "val_loss": 0.1, "epoch_5": 5})
    run.log({"modelAccuracy": 0.95, "learning_rate": 0.001})
```
Invalid metric names (avoid these):

```python
with wandb.init() as run:
    run.log({"acc,val": 0.9})    # Contains comma
    run.log({"loss-train": 0.1}) # Contains hyphen
    run.log({"test acc": 0.95})  # Contains space
    run.log({"5_fold_cv": 0.8})  # Starts with a number
```
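The naming rule can be checked up front with a plain regular expression. This helper is a stdlib sketch for illustration, not part of the W&B API:

```python
import re

# W&B metric-name rule: starts with a letter or underscore,
# followed by letters, digits, or underscores only.
METRIC_NAME = re.compile(r"^[_a-zA-Z][_a-zA-Z0-9]*$")

def is_valid_metric_name(name: str) -> bool:
    """Return True if `name` satisfies the W&B metric naming rule."""
    return METRIC_NAME.fullmatch(name) is not None

assert is_valid_metric_name("val_loss")
assert is_valid_metric_name("modelAccuracy")
assert not is_valid_metric_name("acc,val")     # comma
assert not is_valid_metric_name("loss-train")  # hyphen
assert not is_valid_metric_name("5_fold_cv")   # starts with a digit
```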
Compare the best accuracy: To compare the best value of a metric across runs, set the summary value for that metric. By default, the summary is set to the last value you logged for each key. Overriding it is useful in the runs table in the UI, where you can sort and filter runs by their summary metrics, so you can compare runs in a table or bar chart by their best accuracy rather than their final accuracy. For example: `run.summary["best_accuracy"] = best_accuracy`.
View multiple metrics on one chart: Log multiple metrics in the same call. For example:

```python
with wandb.init() as run:
    run.log({"acc": 0.9, "loss": 0.1})
```

You can then plot both metrics in the UI.
Customize the x-axis: Add a custom x-axis to the same log call to visualize your metrics against a different axis in the W&B dashboard. For example:

```python
with wandb.init() as run:
    run.log({"acc": 0.9, "epoch": 3, "batch": 117})
```

To set the default x-axis for a given metric, use `Run.define_metric()`.
Log rich media and charts: `wandb.Run.log()` supports logging a wide variety of data types, from media like images and videos to tables and charts.
For best practices and tips for Experiments and logging, see Best Practices: Experiments and Logging.