W&B Models

W&B Models is the system of record for ML Practitioners who want to organize their models, boost productivity and collaboration, and deliver production ML at scale.

With W&B Models, you can track and visualize experiments, manage model versions and lineage, and optimize hyperparameters. Machine learning practitioners rely on W&B Models as their ML system of record for these workflows.

1 - Experiments

Track machine learning experiments with W&B.

Track machine learning experiments with a few lines of code. You can then review the results in an interactive dashboard or export your data to Python for programmatic access using our Public API.

Use W&B Integrations if you work with popular frameworks such as PyTorch, Keras, or scikit-learn. See our Integration guides for a full list of integrations and information on how to add W&B to your code.

An example dashboard lets you view and compare metrics across multiple runs.

How it works

Track a machine learning experiment with a few lines of code:

  1. Create a W&B run.
  2. Store a dictionary of hyperparameters, such as learning rate or model type, into your configuration (wandb.config).
  3. Log metrics (wandb.log()) over time in a training loop, such as accuracy and loss.
  4. Save outputs of a run, like the model weights or a table of predictions.

The following pseudocode demonstrates a common W&B Experiment tracking workflow:

# 1. Start a W&B Run
wandb.init(entity="<entity>", project="my-project-name")

# 2. Save model inputs and hyperparameters
wandb.config.learning_rate = 0.01

# Import model and data
model, dataloader = get_model(), get_data()

# Model training code goes here

# 3. Log metrics over time to visualize performance
wandb.log({"loss": loss})

# 4. Log an artifact to W&B
wandb.log_artifact(model)

How to get started

Depending on your use case, explore the following resources to get started with W&B Experiments:

  • Read the W&B Quickstart for a step-by-step outline of the W&B Python SDK commands you could use to create, track, and use a dataset artifact.
  • Explore this chapter to learn how to:
    • Create an experiment
    • Configure experiments
    • Log data from experiments
    • View results from experiments
  • Explore the W&B Python Library within the W&B API Reference Guide.

1.1 - Create an experiment

Create a W&B Experiment.

Use the W&B Python SDK to track machine learning experiments. You can then review the results in an interactive dashboard or export your data to Python for programmatic access with the W&B Public API.

This guide describes how to use W&B building blocks to create a W&B Experiment.

How to create a W&B Experiment

Create a W&B Experiment in four steps:

  1. Initialize a W&B Run
  2. Capture a dictionary of hyperparameters
  3. Log metrics inside your training loop
  4. Log an artifact to W&B

Initialize a W&B run

At the beginning of your script, call the wandb.init() API to generate a background process that syncs and logs data as a W&B Run.

The following code snippet demonstrates how to create a new W&B project named “cat-classification”. The note “My first experiment” helps identify this run, and the tags “baseline” and “paper1” mark it as a baseline experiment intended for a future paper publication.

# Import the W&B Python Library
import wandb

# 1. Start a W&B Run
run = wandb.init(
    project="cat-classification",
    notes="My first experiment",
    tags=["baseline", "paper1"],
)

A Run object is returned when you initialize W&B with wandb.init(). Additionally, W&B creates a local directory where all logs and files are saved and streamed asynchronously to a W&B server.

Capture a dictionary of hyperparameters

Save a dictionary of hyperparameters such as learning rate or model type. The model settings you capture in config are useful later to organize and query your results.

#  2. Capture a dictionary of hyperparameters
wandb.config = {"epochs": 100, "learning_rate": 0.001, "batch_size": 128}

For more information on how to configure an experiment, see Configure Experiments.

Log metrics inside your training loop

Log metrics inside your training loop. During each for loop (epoch), compute the accuracy and loss values and log them to W&B with wandb.log(). By default, each call to wandb.log appends a new step to the history object and updates the summary object.

The following code example shows how to log metrics with wandb.log.

# Set up model and data
model, dataloader = get_model(), get_data()

for epoch in range(wandb.config.epochs):
    for batch in dataloader:
        loss, accuracy = model.training_step()
        #  3. Log metrics inside your training loop to visualize
        # model performance
        wandb.log({"accuracy": accuracy, "loss": loss})

For more information on different data types you can log with W&B, see Log Data During Experiments.

Log an artifact to W&B

Optionally log a W&B Artifact. Artifacts make it easy to version datasets and models.

wandb.log_artifact(model)
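
If your model is saved to a local file, a slightly fuller sketch looks like the following (the file and artifact names here are illustrative):

# Create an artifact, add the saved model file, and log it to the run
artifact = wandb.Artifact(name="trained-model", type="model")
artifact.add_file("model.onnx")  # path to a file saved by your training code
run.log_artifact(artifact)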

For more information about Artifacts, see the Artifacts Chapter. For more information about versioning models, see Model Management.

Putting it all together

The full script with the preceding code snippets is found below:

# Import the W&B Python Library
import wandb

# 1. Start a W&B Run
run = wandb.init(project="cat-classification", notes="", tags=["baseline", "paper1"])

#  2. Capture a dictionary of hyperparameters
wandb.config = {"epochs": 100, "learning_rate": 0.001, "batch_size": 128}

# Set up model and data
model, dataloader = get_model(), get_data()

for epoch in range(wandb.config.epochs):
    for batch in dataloader:
        loss, accuracy = model.training_step()
        #  3. Log metrics inside your training loop to visualize
        # model performance
        wandb.log({"accuracy": accuracy, "loss": loss})

# 4. Log an artifact to W&B
wandb.log_artifact(model)

# Optional: save model at the end
model.to_onnx()
wandb.save("model.onnx")

Next steps: Visualize your experiment

Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like parallel coordinates plots, parameter importance analyses, and more.

Quickstart Sweeps Dashboard example

For more information on how to view experiments and specific runs, see Visualize results from experiments.

Best practices

The following are some suggested guidelines to consider when you create experiments:

  1. Config: Track hyperparameters, architecture, dataset, and anything else you’d like to use to reproduce your model. These show up as columns; use config columns to group, sort, and filter runs dynamically in the app.
  2. Project: A project is a set of experiments you can compare together. Each project gets a dedicated dashboard page, and you can easily turn on and off different groups of runs to compare different model versions.
  3. Notes: Set a quick commit message directly from your script. Edit and access notes in the Overview section of a run in the W&B App.
  4. Tags: Identify baseline runs and favorite runs. You can filter runs using tags. You can edit tags at a later time on the Overview section of your project’s dashboard on the W&B App.

The following code snippet demonstrates how to define a W&B Experiment using the best practices listed above:

import wandb

config = dict(
    learning_rate=0.01, momentum=0.2, architecture="CNN", dataset_id="cats-0192"
)

wandb.init(
    project="detect-cats",
    notes="tweak baseline",
    tags=["baseline", "paper1"],
    config=config,
)

For more information about available parameters when defining a W&B Experiment, see the wandb.init API docs in the API Reference Guide.

1.2 - Configure experiments

Use a dictionary-like object to save your experiment configuration

Use the wandb.config object to save your training configuration such as:

  • hyperparameters
  • input settings such as the dataset name or model type
  • any other independent variables for your experiments.

The wandb.config attribute makes it easy to analyze your experiments and reproduce your work in the future. You can group by configuration values in the W&B App, compare the settings of different W&B Runs and view how different training configurations affect the output. A Run’s config attribute is a dictionary-like object, and it can be built from lots of dictionary-like objects.

Set up an experiment configuration

Configurations are typically defined in the beginning of a training script. Machine learning workflows may vary, however, so you are not required to define a configuration at the beginning of your training script.

The following sections outline common scenarios for defining your experiment’s configuration.

Set the configuration at initialization

Pass a dictionary at the beginning of your script when you call the wandb.init() API to generate a background process to sync and log data as a W&B Run.

The following code snippet demonstrates how to define a Python dictionary with configuration values and how to pass that dictionary as an argument when you initialize a W&B Run.

import wandb

# Define a config dictionary object
config = {
    "hidden_layer_sizes": [32, 64],
    "kernel_sizes": [3],
    "activation": "ReLU",
    "pool_sizes": [2],
    "dropout": 0.5,
    "num_classes": 10,
}

# Pass the config dictionary when you initialize W&B
run = wandb.init(project="config_example", config=config)

Access the values from the dictionary similarly to how you access other dictionaries in Python:

# Access values with the key as the index value
hidden_layer_sizes = wandb.config["hidden_layer_sizes"]
kernel_sizes = wandb.config["kernel_sizes"]
activation = wandb.config["activation"]

# Python dictionary get() method
hidden_layer_sizes = wandb.config.get("hidden_layer_sizes")
kernel_sizes = wandb.config.get("kernel_sizes")
activation = wandb.config.get("activation")

Set the configuration with argparse

You can set your configuration with an argparse object. argparse, short for argument parser, is a standard library module in Python 3.2 and above that makes it easy to write scripts that take advantage of all the flexibility and power of command line arguments.

This is useful for tracking results from scripts that are launched from the command line.

The following Python script demonstrates how to define a parser object to define and set your experiment config. The functions train_one_epoch and evaluate_one_epoch are provided to simulate a training loop for the purpose of this demonstration:

# config_experiment.py
import wandb
import argparse
import numpy as np
import random


# Training and evaluation demo code
def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


def main(args):
    # Start a W&B Run
    run = wandb.init(project="config_example", config=args)

    # Access values from config dictionary and store them
    # into variables for readability
    lr = wandb.config["learning_rate"]
    bs = wandb.config["batch_size"]
    epochs = wandb.config["epochs"]

    # Simulate training and logging values to W&B
    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)

        wandb.log(
            {
                "epoch": epoch,
                "train_acc": train_acc,
                "train_loss": train_loss,
                "val_acc": val_acc,
                "val_loss": val_loss,
            }
        )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )

    parser.add_argument("-b", "--batch_size", type=int, default=32, help="Batch size")
    parser.add_argument(
        "-e", "--epochs", type=int, default=50, help="Number of training epochs"
    )
    parser.add_argument(
        "-lr", "--learning_rate", type=int, default=0.001, help="Learning rate"
    )

    args = parser.parse_args()
    main(args)

Set the configuration throughout your script

You can add more parameters to your config object throughout your script. The following code snippet demonstrates how to add new key-value pairs to your config object:

import wandb

# Define a config dictionary object
config = {
    "hidden_layer_sizes": [32, 64],
    "kernel_sizes": [3],
    "activation": "ReLU",
    "pool_sizes": [2],
    "dropout": 0.5,
    "num_classes": 10,
}

# Pass the config dictionary when you initialize W&B
run = wandb.init(project="config_example", config=config)

# Update config after you initialize W&B
wandb.config["dropout"] = 0.2
wandb.config.epochs = 4
wandb.config["batch_size"] = 32

You can update multiple values at a time:

wandb.init(config={"epochs": 4, "batch_size": 32})
# later
wandb.config.update({"lr": 0.1, "channels": 16})

Set the configuration after your Run has finished

Use the W&B Public API to update your config (or anything else about a completed Run) after the Run has finished. This is particularly useful if you forgot to log a value during a Run.

Provide your entity, project name, and the Run ID to update your configuration after a Run has finished. Find these values directly from the Run object itself wandb.run or from the W&B App UI:

api = wandb.Api()

# Access attributes directly from the run object
# or from the W&B App
username = wandb.run.entity
project = wandb.run.project
run_id = wandb.run.id

run = api.run(f"{username}/{project}/{run_id}")
run.config["bar"] = 32
run.update()

absl.FLAGS

You can also pass in absl flags.

flags.DEFINE_string("model", None, "model to run")  # name, default, help

wandb.config.update(flags.FLAGS)  # adds absl flags to config

File-Based Configs

If you place a file named config-defaults.yaml in the same directory as your run script, the run automatically picks up the key-value pairs defined in the file and passes them to wandb.config.

The following code snippet shows a sample config-defaults.yaml YAML file:

# config-defaults.yaml
epochs:
  desc: Number of epochs to train over
  value: 100
batch_size:
  desc: Size of each mini-batch
  value: 32
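
With config-defaults.yaml in your working directory, the values are available on wandb.config without any extra code. A minimal sketch (the project name here is illustrative):

import wandb

# wandb.init() automatically loads config-defaults.yaml from the working directory
run = wandb.init(project="config_example")
print(run.config["batch_size"])  # 32, loaded from config-defaults.yaml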

You can override the default values automatically loaded from config-defaults.yaml by setting updated values in the config argument of wandb.init. For example:

import wandb
# Override config-defaults.yaml by passing custom values
wandb.init(config={"epochs": 200, "batch_size": 64})

To load a configuration file other than config-defaults.yaml, use the --configs command-line argument and specify the path to the file:

python train.py --configs other-config.yaml

Example use case for file-based configs

Suppose you have a YAML file with some metadata for the run, and then a dictionary of hyperparameters in your Python script. You can save both in the nested config object:

hyperparameter_defaults = dict(
    dropout=0.5,
    batch_size=100,
    learning_rate=0.001,
)

config_dictionary = dict(
    yaml=my_yaml_file,
    params=hyperparameter_defaults,
)

wandb.init(config=config_dictionary)

TensorFlow v1 flags

You can pass TensorFlow flags into the wandb.config object directly.

wandb.init()
wandb.config.epochs = 4

flags = tf.app.flags
flags.DEFINE_string("data_dir", "/tmp/data")
flags.DEFINE_integer("batch_size", 128, "Batch size.")
wandb.config.update(flags.FLAGS)  # add tensorflow flags as config

1.3 - Projects

Compare versions of your model, explore results in a scratch workspace, and export findings to a report to save notes and visualizations

A project is a central location where you visualize results, compare experiments, view and download artifacts, create an automation, and more.

Each project contains the following tabs, which you can access from the sidebar:

  • Overview: snapshot of your project
  • Workspace: personal visualization sandbox
  • Runs: A table that lists all the runs in your project
  • Automations: Automations configured in your project
  • Sweeps: automated exploration and optimization
  • Reports: saved snapshots of notes, runs, and graphs
  • Artifacts: All artifacts associated with the project’s runs

Overview tab

  • Project name: The name of the project. W&B creates a project for you when you initialize a run with the name you provide for the project field. You can change the name of the project at any time by selecting the Edit button in the upper right corner.
  • Description: A description of the project.
  • Project visibility: The visibility of the project, which determines who can access it. See Project visibility for more information.
  • Last active: Timestamp of the last time data was logged to this project
  • Owner: The entity that owns this project
  • Contributors: The number of users that contribute to this project
  • Total runs: The total number of runs in this project
  • Total compute: The sum of all run times in your project
  • Undelete runs: Click the dropdown menu and click “Undelete all runs” to recover deleted runs in your project.
  • Delete project: click the dot menu in the right corner to delete a project

View a live example

Workspace tab

A project’s workspace gives you a personal sandbox to compare experiments. Use projects to organize models that can be compared against each other: models working on the same problem with different architectures, hyperparameters, datasets, preprocessing steps, and so on.

Runs Sidebar: list of all the runs in your project

  • Dot menu: hover over a row in the sidebar to see the menu appear on the left side. Use this menu to rename a run, delete a run, or stop an active run.
  • Visibility icon: click the eye to turn on and off runs on graphs
  • Color: change the run color to another one of our presets or a custom color
  • Search: search runs by name. This also filters visible runs in the plots.
  • Filter: use the sidebar filter to narrow down the set of runs visible
  • Group: select a config column to dynamically group your runs, for example by architecture. Grouping makes plots show up with a line along the mean value, and a shaded region for the variance of points on the graph.
  • Sort: pick a value to sort your runs by, for example runs with the lowest loss or highest accuracy. Sorting will affect which runs show up on the graphs.
  • Expand button: expand the sidebar into the full table
  • Run count: the number in parentheses at the top is the total number of runs in the project. The number (N visualized) is the number of runs that have the eye turned on and are available to be visualized in each plot. In the example below, the graphs are only showing the first 10 of 183 runs. Edit a graph to increase the max number of runs visible.

Panels layout: use this scratch space to explore results, add and remove charts, and compare versions of your models based on different metrics

View a live example

Add a section of panels

Click the section dropdown menu and click “Add section” to create a new section for panels. You can rename sections, drag them to reorganize them, and expand and collapse sections.

Each section has options in the upper right corner:

  • Switch to custom layout: The custom layout allows you to resize panels individually.
  • Switch to standard layout: The standard layout lets you resize all panels in the section at once, and gives you pagination.
  • Add section: Add a section above or below from the dropdown menu, or click the button at the bottom of the page to add a new section.
  • Rename section: Change the title for your section.
  • Export section to report: Save this section of panels to a new report.
  • Delete section: Remove the whole section and all the charts. This can be undone with the undo button at the bottom of the page in the workspace bar.
  • Add panel: Click the plus button to add a panel to the section.

Move panels between sections

Drag and drop panels to reorder and organize into sections. You can also click the “Move” button in the upper right corner of a panel to select a section to move the panel to.

Resize panels

  • Standard layout: All panels maintain the same size, and there are pages of panels. You can resize the panels by clicking and dragging the lower right corner. Resize the section by clicking and dragging the lower right corner of the section.
  • Custom layout: All panels are sized individually, and there are no pages.

Search for metrics

Use the search box in the workspace to filter down the panels. This search matches the panel titles, which are by default the name of the metrics visualized.

Runs tab

Use the runs tab to filter, group, and sort your results.

The following tabs demonstrate some common actions you can take in the runs tab.

Sort all rows in a Table by the value in a given column.

  1. Hover your mouse over the column title. A kebab menu (three vertical dots) appears.
  2. Select the kebab menu.
  3. Choose Sort Asc or Sort Desc to sort the rows in ascending or descending order, respectively.

The preceding image demonstrates how to view sorting options for a Table column called val_acc.

Filter all rows by an expression with the Filter button on the top left of the dashboard.


Select Add filter to add one or more filters to your rows. Three dropdown menus will appear. From left to right, the filter types are based on: Column name, Operator, and Value.

Accepted values for each term:

  • Column name: String
  • Binary relation: =, ≠, ≤, ≥, IN, NOT IN
  • Value: Integer, float, string, timestamp, null

The expression editor shows a list of options for each term using autocomplete on column names and logical predicate structure. You can connect multiple logical predicates into one expression using “and” or “or” (and sometimes parentheses).

The preceding image shows a filter that is based on the `val_loss` column. The filter shows runs with a validation loss less than or equal to 1.

Group all rows by the value in a particular column with the Group by button in a column header.


By default, this turns other numeric columns into histograms showing the distribution of values for that column across the group. Grouping is helpful for understanding higher-level patterns in your data.

Reports tab

See all the snapshots of results in one place, and share findings with your team.

Sweeps tab

Start a new sweep from your project.

Artifacts tab

View all the artifacts associated with a project, from training datasets and fine-tuned models to tables of metrics and media.

Overview panel

On the overview panel, you’ll find a variety of high-level information about the artifact, including its name and version, the hash digest used to detect changes and prevent duplication, the creation date, and any aliases. You can add or remove aliases here, and take notes on both the version and the artifact as a whole.

Metadata panel

The metadata panel provides access to the artifact’s metadata, which is provided when the artifact is constructed. This metadata might include configuration arguments required to reconstruct the artifact, URLs where more information can be found, or metrics produced during the run which logged the artifact. Additionally, you can see the configuration for the run which produced the artifact as well as the history metrics at the time of logging the artifact.
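
For example, a minimal sketch of attaching metadata when an artifact is constructed (the names and values here are illustrative):

import wandb

run = wandb.init(project="artifact-metadata-example")
artifact = wandb.Artifact(
    name="training-dataset",
    type="dataset",
    metadata={"source": "https://example.com/data", "num_examples": 60000},
)
# Add files with artifact.add_file() or artifact.add_dir() before logging
run.log_artifact(artifact)
run.finish()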

Usage panel

The Usage panel provides a code snippet for downloading the artifact for use outside of the web app, for example on a local machine. This section also indicates and links to the run which output the artifact and any runs which use the artifact as an input.
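
The snippet generally resembles the following sketch (the artifact path and alias are placeholders):

import wandb

run = wandb.init(project="<project>")
# Declare the artifact as an input to this run and download its contents
artifact = run.use_artifact("<entity>/<project>/<artifact-name>:<alias>")
artifact_dir = artifact.download()  # returns the local directory with the files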

Files panel

The files panel lists the files and folders associated with the artifact. W&B uploads certain files for a run automatically. For example, requirements.txt shows the versions of each library the run used, and wandb-metadata.json and wandb-summary.json include information about the run. Other files may be uploaded, such as artifacts or media, depending on the run’s configuration. You can navigate through this file tree and view the contents directly in the W&B web app.

Tables associated with artifacts are particularly rich and interactive in this context. Learn more about using Tables with Artifacts here.

Lineage panel

The lineage panel provides a view of all of the artifacts associated with a project and the runs that connect them to each other. It shows run types as blocks and artifacts as circles, with arrows to indicate when a run of a given type consumes or produces an artifact of a given type. The type of the particular artifact selected in the left-hand column is highlighted.

Click the Explode toggle to view all of the individual artifact versions and the specific runs that connect them.

Action History Audit tab

The action history audit tab shows all of the alias actions and membership changes for a Collection so you can audit the entire evolution of the resource.

Versions tab

The versions tab shows all versions of the artifact as well as columns for each of the numeric values of the Run History at the time of logging the version. This allows you to compare performance and quickly identify versions of interest.

Star a project

Add a star to a project to mark that project as important. Projects that you and your team mark as important with stars appear at the top of your organization’s home page.

For example, the following image shows two projects that are marked as important, the zoo_experiment and registry_demo. Both projects appear at the top of the organization’s home page within the Starred projects section.

There are two ways to mark a project as important: within a project’s overview tab or within your team’s profile page.

From the project’s overview tab:

  1. Navigate to your W&B project on the W&B App at https://wandb.ai/<team>/<project-name>.
  2. Select the Overview tab from the project sidebar.
  3. Choose the star icon in the upper right corner next to the Edit button.

From your team’s profile page:

  1. Navigate to your team’s profile page at https://wandb.ai/<team>/projects.
  2. Select the Projects tab.
  3. Hover your mouse next to the project you want to star. Click the star icon that appears.

For example, the following image shows the star icon next to the “Compare_Zoo_Models” project.

Confirm that your project appears on the landing page of your organization by clicking on the organization name in the top left corner of the app.

Delete a project

You can delete your project by clicking the three dots on the right of the overview tab.

If the project is empty, you can delete it by clicking the dropdown menu in the top-right and selecting Delete project.

Add notes to a project

Add notes to your project either as a description overview or as a markdown panel within your workspace.

Add description overview to a project

Descriptions you add appear in the Overview tab of the project.

  1. Navigate to your W&B project
  2. Select the Overview tab from the project sidebar
  3. Choose Edit in the upper right hand corner
  4. Add your notes in the Description field
  5. Select the Save button

Add notes to run workspace

  1. Navigate to your W&B project
  2. Select the Workspace tab from the project sidebar
  3. Choose the Add panels button from the top right corner
  4. Select the TEXT AND CODE dropdown from the modal that appears
  5. Select Markdown
  6. Add your notes in the markdown panel that appears in your workspace

1.4 - View experiment results

A playground for exploring run data with interactive visualizations

A W&B workspace is your personal sandbox to customize charts and explore model results. A workspace consists of Tables and Panel sections:

  • Tables: All runs logged to your project are listed in the project’s table. Turn on and off runs, change colors, and expand the table to see notes, config, and summary metrics for each run.
  • Panel sections: A section that contains one or more panels. Create new panels, organize them, and export to reports to save snapshots of your workspace.

Workspace types

There are two main workspace categories: Personal workspaces and Saved views.

  • Personal workspaces: A customizable workspace for in-depth analysis of models and data visualizations. Only the owner of the workspace can edit and save changes. Teammates can view a personal workspace but cannot make changes to it.
  • Saved views: Saved views are collaborative snapshots of a workspace. Anyone on your team can view, edit, and save changes to saved workspace views. Use saved workspace views for reviewing and discussing experiments, runs, and more.

The following image shows multiple personal workspaces created by Cécile-parker’s teammates. In this project, there are no saved views.

Saved workspace views

Improve team collaboration with tailored workspace views. Create Saved Views to organize your preferred setup of charts and data.

Create a new saved workspace view

  1. Navigate to a personal workspace or a saved view.
  2. Make edits to the workspace.
  3. Click on the meatball menu (three horizontal dots) at the top right corner of your workspace. Click on Save as a new view.

New saved views appear in the workspace navigation menu.

Update a saved workspace view

Saved changes overwrite the previous state of the saved view. Unsaved changes are not retained. To update a saved workspace view in W&B:

  1. Navigate to a saved view.
  2. Make the desired changes to your charts and data within the workspace.
  3. Click the Save button to confirm your changes.

Delete a saved workspace view

Remove saved views that are no longer needed.

  1. Navigate to the saved view you want to remove.
  2. Select the meatball menu (three horizontal dots) at the top right of the view.
  3. Choose Delete view.
  4. Confirm the deletion to remove the view from your workspace menu.

Share a workspace view

Share your customized workspace with your team by sharing the workspace URL directly. All users with access to the workspace project can see the saved Views of that workspace.

Programmatically creating workspaces

wandb-workspaces is a Python library for programmatically working with W&B workspaces and reports. Use it to define a workspace programmatically.

You can define the workspace’s properties, such as:

  • Set panel layouts, colors, and section orders.
  • Configure workspace settings like default x-axis, section order, and collapse states.
  • Add and customize panels within sections to organize workspace views.
  • Load and modify existing workspaces using a URL.
  • Save changes to existing workspaces or save as new views.
  • Filter, group, and sort runs programmatically using simple expressions.
  • Customize run appearance with settings like colors and visibility.
  • Copy views from one workspace to another for integration and reuse.

Install Workspace API

In addition to wandb, ensure that you install wandb-workspaces:

pip install wandb wandb-workspaces

Define and save a workspace view programmatically

import wandb_workspaces.workspaces as ws

workspace = ws.Workspace(entity="your-entity", project="your-project", views=[...])
workspace.save()

Edit an existing view

existing_workspace = ws.Workspace.from_url("workspace-url")
existing_workspace.views[0] = ws.View(name="my-new-view", sections=[...])
existing_workspace.save()

Copy a workspace saved view to another workspace

old_workspace = ws.Workspace.from_url("old-workspace-url")
old_workspace_view = old_workspace.views[0]
new_workspace = ws.Workspace(entity="new-entity", project="new-project", views=[old_workspace_view])

new_workspace.save()

See wandb-workspace examples for comprehensive workspace API examples. For an end to end tutorial, see Programmatic Workspaces tutorial.

1.5 - What are runs?

Learn about the basic building block of W&B, Runs.

A run is a single unit of computation logged by W&B. You can think of a W&B run as an atomic element of your whole project. In other words, each run is a record of a specific computation, such as training a model and logging the results, hyperparameter sweeps, and so forth.

Common patterns for initiating a run include, but are not limited to:

  • Training a model
  • Changing a hyperparameter and conducting a new experiment
  • Conducting a new machine learning experiment with a different model
  • Logging data or a model as a W&B Artifact
  • Downloading a W&B Artifact

W&B stores runs that you create into projects. You can view runs and their properties within the run’s project workspace on the W&B App UI. You can also programmatically access run properties with the wandb.Api.Run object.
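
For example, a minimal sketch of reading a run’s properties with the Public API (the entity, project, and run ID are placeholders):

import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run-id>")
print(run.state)    # for example, "finished"
print(run.config)   # the hyperparameters logged to the run
print(run.summary)  # the run's summary metrics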

Anything you log with run.log is recorded in that run. Consider the following code snippet.

import wandb

run = wandb.init(entity="nico", project="awesome-project")
run.log({"accuracy": 0.9, "loss": 0.1})

The first line imports the W&B Python SDK. The second line initializes a run in the project awesome-project under the entity nico. The third line logs the accuracy and loss of the model to that run.

Within the terminal, W&B returns:

wandb: Syncing run earnest-sunset-1
wandb: ⭐️ View project at https://wandb.ai/nico/awesome-project
wandb: 🚀 View run at https://wandb.ai/nico/awesome-project/runs/1jx1ud12
wandb:                                                                                
wandb: 
wandb: Run history:
wandb: accuracy ▁
wandb:     loss ▁
wandb: 
wandb: Run summary:
wandb: accuracy 0.9
wandb:     loss 0.1
wandb: 
wandb: 🚀 View run earnest-sunset-1 at: https://wandb.ai/nico/awesome-project/runs/1jx1ud12
wandb: ⭐️ View project at: https://wandb.ai/nico/awesome-project
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20241105_111006-1jx1ud12/logs

The URL W&B returns in the terminal redirects you to the run’s workspace in the W&B App UI. Note that the panels generated in the workspace correspond to the single logged point.

Logging a metric at a single point in time might not be that useful. A more realistic example in the case of training discriminative models is to log metrics at regular intervals. For example, consider the following code snippet:

import random

import wandb

epochs = 10
lr = 0.01

run = wandb.init(
    entity="nico",
    project="awesome-project",
    config={
        "learning_rate": lr,
        "epochs": epochs,
    },
)

offset = random.random() / 5

# simulating a training run
for epoch in range(epochs):
    acc = 1 - 2**-epoch - random.random() / (epoch + 1) - offset
    loss = 2**-epoch + random.random() / (epoch + 1) + offset
    print(f"epoch={epoch}, accuracy={acc}, loss={loss}")
    run.log({"accuracy": acc, "loss": loss})

This returns the following output:

wandb: Syncing run jolly-haze-4
wandb: ⭐️ View project at https://wandb.ai/nico/awesome-project
wandb: 🚀 View run at https://wandb.ai/nico/awesome-project/runs/pdo5110r
lr: 0.01
epoch=0, accuracy=-0.10070974957523078, loss=1.985328507123956
epoch=1, accuracy=0.2884687745057535, loss=0.7374362314407752
epoch=2, accuracy=0.7347387967382066, loss=0.4402409835486663
epoch=3, accuracy=0.7667969248039795, loss=0.26176963846423457
epoch=4, accuracy=0.7446848791003173, loss=0.24808611724405083
epoch=5, accuracy=0.8035095836268268, loss=0.16169791827329466
epoch=6, accuracy=0.861349032371624, loss=0.03432578493587426
epoch=7, accuracy=0.8794926436276016, loss=0.10331872172219471
epoch=8, accuracy=0.9424839917077272, loss=0.07767793473500445
epoch=9, accuracy=0.9584880427028566, loss=0.10531971149250456
wandb: 🚀 View run jolly-haze-4 at: https://wandb.ai/nico/awesome-project/runs/pdo5110r
wandb: Find logs at: wandb/run-20241105_111816-pdo5110r/logs

The training script calls run.log 10 times. Each time the script calls run.log, W&B logs the accuracy and loss for that epoch. Selecting the URL that W&B prints in the preceding output directs you to the run’s workspace in the W&B App UI.

Note that W&B captures the simulated training loop within a single run called jolly-haze-4. This is because the script calls the wandb.init method only once.

As another example, during a sweep, W&B explores a hyperparameter search space that you specify. W&B implements each new hyperparameter combination that the sweep creates as a unique run.

Initialize a run

Initialize a W&B run with wandb.init(). The following code snippet shows how to import the W&B Python SDK and initialize a run.

Make sure to replace values enclosed in angle brackets (< >) with your own values:

import wandb

run = wandb.init(entity="<entity>", project="<project>")

When you initialize a run, W&B logs your run to the project you specify for the project field (wandb.init(project="<project>")). W&B creates a new project if the project does not already exist. If the project already exists, W&B stores the run in that project.

Each run in W&B has a unique identifier known as a run ID. You can specify a unique ID or let W&B randomly generate one for you.

Each run also has a human-readable, non-unique identifier known as a run name. You can specify a name for your run or let W&B randomly generate one for you.

For example, consider the following code snippet:

import wandb

run = wandb.init(entity="wandbee", project="awesome-project")

The code snippet produces the following output:

🚀 View run exalted-darkness-6 at: 
https://wandb.ai/nico/awesome-project/runs/pgbn9y21
Find logs at: wandb/run-20241106_090747-pgbn9y21/logs

Since the preceding code did not specify an argument for the id parameter, W&B creates a unique run ID. Here, nico is the entity that logged the run, awesome-project is the name of the project the run is logged to, exalted-darkness-6 is the name of the run, and pgbn9y21 is the run ID.

Each run has a state that describes the current status of the run. See Run states for a full list of possible run states.

Run states

A run can be in one of the following states:

  • Finished: The run ended and fully synced data, or called wandb.finish().
  • Failed: The run ended with a non-zero exit status.
  • Crashed: The run stopped sending heartbeats in the internal process, which can happen if the machine crashes.
  • Running: The run is still running and has recently sent a heartbeat.

Unique run identifiers

Run IDs are unique identifiers for runs. By default, W&B generates a random and unique run ID for you when you initialize a new run. You can also specify your own unique run ID when you initialize a run.

Autogenerated run IDs

If you do not specify a run ID when you initialize a run, W&B generates a random run ID for you. You can find the unique ID of a run in the W&B App UI.

  1. Navigate to the W&B App UI at https://wandb.ai/home.
  2. Navigate to the W&B project you specified when you initialized the run.
  3. Within your project’s workspace, select the Runs tab.
  4. Select the Overview tab.

W&B displays the unique run ID in the Run path field. The run path consists of the name of your team, the name of the project, and the run ID. The unique ID is the last part of the run path.

For example, in the following image, the unique run ID is 9mxi1arc:

Custom run IDs

You can specify your own run ID by passing the id parameter to the wandb.init method.

import wandb

run = wandb.init(entity="<project>", project="<project>", id="<run-id>")

You can use a run’s unique ID to directly navigate to the run’s overview page in the W&B App UI. The following cell shows the URL path for a specific run:

https://wandb.ai/<entity>/<project>/runs/<run-id>

Where values enclosed in angle brackets (< >) are placeholders for the actual values of the entity, project, and run ID.

Name your run

The name of a run is a human-readable, non-unique identifier.

By default, W&B generates a random run name when you initialize a new run. The name of a run appears within your project’s workspace and at the top of the run’s overview page.

You can specify a name for your run by passing the name parameter to the wandb.init method.

import wandb

run = wandb.init(entity="<project>", project="<project>", name="<run-name>")

Add a note to a run

Notes that you add to a specific run appear on the run page in the Overview tab and in the table of runs on the project page.

  1. Navigate to your W&B project
  2. Select the Workspace tab from the project sidebar
  3. Select the run you want to add a note to from the run selector
  4. Choose the Overview tab
  5. Select the pencil icon next to the Description field and add your notes

Stop a run

Stop a run from the W&B App or programmatically.

To stop a run from the terminal:

  1. Navigate to the terminal or code editor where you initialized the run.
  2. Press Ctrl+D to stop the run.

For example, following the preceding instructions, your terminal might look similar to the following:

KeyboardInterrupt
wandb: 🚀 View run legendary-meadow-2 at: https://wandb.ai/nico/history-blaster-4/runs/o8sdbztv
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 1 other file(s)
wandb: Find logs at: ./wandb/run-20241106_095857-o8sdbztv/logs

Navigate to the W&B App UI to confirm the run is no longer active:

  1. Navigate to the project that your run was logging to.
  2. Select the name of the run.
  3. Choose the Overview tab from the project sidebar.

Next to the State field, the run’s state changes from running to Killed.

To stop a run from the W&B App:

  1. Navigate to the project that your run is logging to.
  2. Select the run you want to stop within the run selector.
  3. Choose the Overview tab from the project sidebar.
  4. Select the top button next to the State field.

Next to the State field, the run’s state changes from running to Killed.

See State fields for a full list of possible run states.
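
You can also end a run cleanly from your script by calling run.finish(), which marks the run as Finished rather than Killed. A minimal sketch:

import wandb

run = wandb.init(project="<project>")
# ... training code ...
run.finish()  # marks the run as Finished and uploads any remaining data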

View logged runs

View information about a specific run such as the state of the run, artifacts logged to the run, log files recorded during the run, and more.

To view a specific run:

  1. Navigate to the W&B App UI at https://wandb.ai/home.

  2. Navigate to the W&B project you specified when you initialized the run.

  3. Within the project sidebar, select the Workspace tab.

  4. Within the run selector, click the run you want to view, or enter a partial run name to filter for matching runs.

    By default, long run names are truncated in the middle for readability. To truncate run names at the beginning or end instead, click the action ... menu at the top of the list of runs, then set Run name cropping to crop the end, middle, or beginning.

Note that the URL path of a specific run has the following format:

https://wandb.ai/<team-name>/<project-name>/runs/<run-id>

Where values enclosed in angle brackets (< >) are placeholders for the actual values of the team name, project name, and run ID.

Overview tab

Use the Overview tab to learn about specific run information in a project, such as:

  • Author: The W&B entity that creates the run.
  • Command: The command that initializes the run.
  • Description: A description of the run that you provided. This field is empty if you do not specify a description when you create the run. You can add a description to a run with the W&B App UI or programmatically with the Python SDK.
  • Duration: The amount of time the run is actively computing or logging data, excluding any pauses or waiting.
  • Git repository: The git repository associated with the run. You must enable git to view this field.
  • Host name: Where W&B computes the run. W&B displays the name of your machine if you initialize the run locally on your machine.
  • Name: The name of the run.
  • OS: Operating system that initializes the run.
  • Python executable: The command that starts the run.
  • Python version: Specifies the Python version that creates the run.
  • Run path: Identifies the unique run identifier in the form entity/project/run-ID.
  • Runtime: Measures the total time from the start to the end of the run. It’s the wall-clock time for the run. Runtime includes any time where the run is paused or waiting for resources, while duration does not.
  • Start time: The timestamp when you initialize the run.
  • State: The state of the run.
  • System hardware: The hardware W&B uses to compute the run.
  • Tags: A list of strings. Tags are useful for organizing related runs together or applying temporary labels like baseline or production.
  • W&B CLI version: The W&B CLI version installed on the machine that hosted the run command.

W&B stores the following information below the overview section:

  • Artifact Outputs: Artifact outputs produced by the run.
  • Config: List of config parameters saved with wandb.config.
  • Summary: List of summary parameters saved with wandb.log(). By default, W&B sets this value to the last value logged.
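
You can also overwrite a summary value directly, for example to record the best metric instead of the last logged one. A minimal sketch:

import wandb

run = wandb.init(project="<project>")
run.log({"accuracy": 0.9})
# Override the default summary (the last logged value) with the best value
run.summary["accuracy"] = 0.95
run.finish()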

View an example project overview here.

Workspace tab

Use the Workspace tab to view, search, group, and arrange visualizations such as autogenerated and custom plots, system metrics, and more.

View an example project workspace here

System tab

The System tab shows system metrics tracked for a specific run such as CPU utilization, system memory, disk I/O, network traffic, GPU utilization and more.

For a full list of system metrics W&B tracks, see System metrics.

View an example system tab here.

Logs tab

The Log tab shows output printed on the command line such as the standard output (stdout) and standard error (stderr).

Choose the Download button in the upper right hand corner to download the log file.

View an example logs tab here.

Files tab

Use the Files tab to view files associated with a specific run such as model checkpoints, validation set examples, and more.

View an example files tab here.

Artifacts tab

The Artifacts tab lists the input and output artifacts for the specified run.

View an example artifacts tab here.

Delete runs

Delete one or more runs from a project with the W&B App.

  1. Navigate to the project that contains the runs you want to delete.
  2. Select the Runs tab from the project sidebar.
  3. Select the checkbox next to the runs you want to delete.
  4. Choose the Delete button (trash can icon) above the table.
  5. From the modal that appears, choose Delete.

1.5.1 - Add labels to runs with tags

Add tags to label runs with particular features that might not be obvious from the logged metrics or artifact data.

For example, you can add a tag to a run to indicate that the run’s model is in_production, that the run is preemptible, that the run represents the baseline, and so forth.

Add tags to one or more runs

Programmatically or interactively add tags to your runs.

Based on your use case, select the tab below that best fits your needs:

You can add tags to a run when it is created:

import wandb

run = wandb.init(
  entity="entity",
  project="<project-name>",
  tags=["tag1", "tag2"]
)

You can also update the tags after you initialize a run. For example, the following code snippet shows how to add a tag if a particular metric crosses a predefined threshold:

import wandb

run = wandb.init(
  entity="entity", 
  project="capsules", 
  tags=["debug"]
  )

# python logic to train model

if current_loss < threshold:
    run.tags = run.tags + ("release_candidate",)

After you create a run, you can update tags using the Public API. For example:

run = wandb.Api().run("{entity}/{project}/{run-id}")
run.tags.append("tag1")  # you can choose tags based on run data here
run.update()

This method is best suited to tagging large numbers of runs with the same tag or tags.

  1. Navigate to your project workspace.
  2. Select Runs from the project sidebar.
  3. Select one or more runs from the table.
  4. Once you select one or more runs, select the Tag button above the table.
  5. Type the tag you want to add and select the Create new tag checkbox to add the tag.

This method is best suited to applying a tag or tags to a single run manually.

  1. Navigate to your project workspace.
  2. Select a run from the list of runs within your project’s workspace.
  3. Select Overview from the project sidebar.
  4. Select the gray plus icon (+) button next to Tags.
  5. Type a tag you want to add and select Add below the text box to add a new tag.

Remove tags from one or more runs

Tags can also be removed from runs with the W&B App UI.

This method is best suited to removing tags from a large number of runs.

  1. In the Run sidebar of the project, select the table icon in the upper-right. This will expand the sidebar into the full runs table.
  2. Hover over a run in the table to see a checkbox on the left or look in the header row for a checkbox to select all runs.
  3. Select the checkbox to enable bulk actions.
  4. Select the runs you want to remove tags from.
  5. Select the Tag button above the rows of runs.
  6. Select the checkbox next to a tag to remove it from the run.
This method is best suited to removing a tag or tags from a single run manually.

  1. In the left sidebar of the Run page, select the top Overview tab. The tags on the run are visible here.
  2. Hover over a tag and select the “x” to remove it from the run.

1.5.2 - Filter and search runs

How to use the sidebar and table on the project page

Use your project page to gain insights from runs logged to W&B.

Filter runs

Filter runs based on their status, tags, or other properties with the filter button.

Filter runs with tags

Filter runs based on their tags with the filter button.

Filter runs with regex

If regex doesn’t provide you the desired results, you can make use of tags to filter the runs in the Runs Table. Tags can be added either on run creation or after runs have finished. Once tags are added to a run, you can add a tag filter.


Search run names

Use the search box to find runs whose names match a regex you specify. When you type a query in the search box, it filters the visible runs in the graphs on the workspace as well as the rows of the table.

Sort runs by minimum and maximum values

Sort the runs table by the minimum or maximum value of a logged metric. This is particularly useful if you want to view the best (or worst) recorded value.

The following steps describe how to sort the run table by a specific metric based on the minimum or maximum recorded value:

  1. Hover your mouse over the column with the metric you want to sort with.
  2. Select the kebab menu (three vertical dots).
  3. From the dropdown, select either Show min or Show max.
  4. From the same dropdown, select Sort by asc or Sort by desc to sort in ascending or descending order, respectively.

Search End Time for runs

We provide a column named End Time that logs the last heartbeat from the client process. The field is hidden by default.

Export runs table to CSV

Export the table of all your runs, hyperparameters, and summary metrics to a CSV with the download button.
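
If you prefer to build the same export programmatically, here is a sketch using the Public API and pandas (the entity and project names are placeholders):

import pandas as pd
import wandb

api = wandb.Api()
runs = api.runs("<entity>/<project>")

rows = []
for run in runs:
    row = {"name": run.name}
    # Hyperparameters, excluding internal keys that start with an underscore
    row.update({k: v for k, v in run.config.items() if not k.startswith("_")})
    # Summary metrics logged for the run
    row.update(run.summary._json_dict)
    rows.append(row)

pd.DataFrame(rows).to_csv("project_runs.csv", index=False)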

1.5.3 - Fork a run

Forking a W&B run

Use fork_from when you initialize a run with wandb.init() to “fork” from an existing W&B run. When you fork from a run, W&B creates a new run using the run ID and step of the source run.

Forking a run enables you to explore different parameters or models from a specific point in an experiment without impacting the original run.

Start a forked run

To fork a run, use the fork_from argument in wandb.init() and specify the source run ID and the step from the source run to fork from:

import wandb

# Initialize a run to be forked later
original_run = wandb.init(project="your_project_name", entity="your_entity_name")
# ... perform training or logging ...
original_run.finish()

# Fork the run from a specific step
forked_run = wandb.init(
    project="your_project_name",
    entity="your_entity_name",
    fork_from=f"{original_run.id}?_step=200",
)

Using an immutable run ID

Use an immutable run ID to ensure you have a consistent and unchanging reference to a specific run. Follow these steps to obtain the immutable run ID from the user interface:

  1. Access the Overview Tab: Navigate to the Overview tab on the source run’s page.

  2. Copy the Immutable Run ID: Click on the ... menu (three dots) located in the top-right corner of the Overview tab. Select the Copy Immutable Run ID option from the dropdown menu.

By following these steps, you will have a stable and unchanging reference to the run, which can be used for forking a run.

Continue from a forked run

After initializing a forked run, you can continue logging to the new run. You can log the same metrics for continuity and introduce new metrics.

For example, the following code example shows how to first fork a run and then how to log metrics to the forked run starting from a training step of 200:

import wandb
import math

# Initialize the first run and log some metrics
run1 = wandb.init(project="your_project_name", entity="your_entity_name")
for i in range(300):
    run1.log({"metric": i})
run1.finish()

# Fork from the first run at a specific step and log the metric starting from step 200
run2 = wandb.init(
    project="your_project_name", entity="your_entity_name", fork_from=f"{run1.id}?_step=200"
)

# Continue logging in the new run
# For the first few steps, log the metric as is from run1
# After step 250, start logging the spikey pattern
for i in range(200, 300):
    if i < 250:
        run2.log({"metric": i})  # Continue logging from run1 without spikes
    else:
        # Introduce the spikey behavior starting from step 250
        subtle_spike = i + (2 * math.sin(i / 3.0))  # Apply a subtle spikey pattern
        run2.log({"metric": subtle_spike})
    # Additionally log the new metric at all steps
    run2.log({"additional_metric": i * 1.1})
run2.finish()

1.5.4 - Group runs into experiments

Group training and evaluation runs into larger experiments

Group individual jobs into experiments by passing a unique group name to wandb.init().

Use cases

  1. Distributed training: Use grouping if your experiments are split up into different pieces with separate training and evaluation scripts that should be viewed as parts of a larger whole.
  2. Multiple processes: Group multiple smaller processes together into an experiment.
  3. K-fold cross-validation: Group together runs with different random seeds to see a larger experiment. Here’s an example of k-fold cross-validation with sweeps and grouping.

There are three ways to set grouping:

1. Set group in your script

Pass an optional group and job_type to wandb.init(). This gives you a dedicated group page for each experiment, which contains the individual runs. For example: wandb.init(group="experiment_1", job_type="eval").
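
For instance, a training script and an evaluation script might both join the same group so their runs appear together as one experiment (the group name and job types here are illustrative):

# train.py
import wandb

run = wandb.init(project="<project>", group="experiment_1", job_type="train")
# ... training code ...
run.finish()

# eval.py
import wandb

run = wandb.init(project="<project>", group="experiment_1", job_type="eval")
# ... evaluation code ...
run.finish()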

2. Set a group environment variable

Use WANDB_RUN_GROUP to specify a group for your runs as an environment variable. For more on this, check our docs for Environment Variables. Group should be unique within your project and shared by all runs in the group. You can use wandb.util.generate_id() to generate a unique 8-character string to use in all your processes. For example: os.environ["WANDB_RUN_GROUP"] = "experiment-" + wandb.util.generate_id()

3. Toggle grouping in the UI

You can dynamically group by any config column. For example, if you use wandb.config to log batch size or learning rate, you can then group by those hyperparameters dynamically in the web app.

Distributed training with grouping

If you set grouping in wandb.init(), W&B groups runs by default in the UI. You can toggle this on and off by clicking the Group button at the top of the table. Here’s an example project generated from sample code where we set grouping. You can click each “Group” row in the sidebar to get to a dedicated group page for that experiment.

From the project page above, you can click a Group in the left sidebar to get to a dedicated page like this one:

Grouping dynamically in the UI

You can group runs by any column, for example by hyperparameter. Here’s an example of what that looks like:

  • Sidebar: Runs are grouped by the number of epochs.
  • Graphs: Each line represents the group’s mean, and the shading indicates the variance. This behavior can be changed in the graph settings.

Turn off grouping

Click the grouping button and clear group fields at any time, which returns the table and graphs to their ungrouped state.

Grouping graph settings

Click the edit button in the upper right corner of a graph and select the Advanced tab to change the line and shading. You can select the mean, minimum, or maximum value for the line in each group. For the shading, you can turn off shading, and show the min and max, the standard deviation, and the standard error.

1.5.5 - Move runs

Move runs between your projects or to a team you are a member of.

Move runs between your projects

To move runs from one project to another:

  1. Navigate to the project that contains the runs you want to move.
  2. Select the Runs tab from the project sidebar.
  3. Select the checkbox next to the runs you want to move.
  4. Choose the Move button above the table.
  5. Select the destination project from the dropdown.

Move runs to a team

Move runs to a team you are a member of:

  1. Navigate to the project that contains the runs you want to move.
  2. Select the Runs tab from the project sidebar.
  3. Select the checkbox next to the runs you want to move.
  4. Choose the Move button above the table.
  5. Select the destination team and project from the dropdown.

1.5.6 - Resume a run

Resume a paused or exited W&B Run

Specify how a run should behave in the event that it stops or crashes. To resume or enable a run to automatically resume, you will need to specify the unique run ID associated with that run for the id parameter:

run = wandb.init(
    entity="<entity>", project="<project>", id="<run ID>", resume="<resume>"
)

Pass one of the following arguments to the resume parameter to determine how W&B should respond. In each case, W&B first checks if the run ID already exists.

  • "must": W&B must resume the run specified by the run ID. If the run ID exists, W&B resumes the run with the same run ID. If the run ID does not exist, W&B raises an error. Use case: resume a run that must use the same run ID.
  • "allow": Allow W&B to resume the run if the run ID exists. If the run ID exists, W&B resumes the run with the same run ID. If the run ID does not exist, W&B initializes a new run with the specified run ID. Use case: resume a run without overriding an existing run.
  • "never": Never allow W&B to resume a run specified by the run ID. If the run ID exists, W&B raises an error. If the run ID does not exist, W&B initializes a new run with the specified run ID.

You can also specify resume="auto" to let W&B automatically try to restart the run on your behalf. However, you will need to ensure that you restart your run from the same directory. See the Enable runs to automatically resume section for more information.

For all the examples below, replace values enclosed within <> with your own.

Resume a run that must use the same run ID

If a run is stopped, crashes, or fails, you can resume it using the same run ID. To do so, initialize a run and specify the following:

  • Set the resume parameter to "must" (resume="must")
  • Provide the run ID of the run that stopped or crashed

The following code snippet shows how to accomplish this with the W&B Python SDK:

run = wandb.init(
    entity="<entity>", project="<project>", id="<run ID>", resume="must"
)

Resume a run without overriding the existing run

Resume a run that stopped or crashed without overriding the existing run. This is especially helpful if your process doesn’t exit successfully. The next time you start W&B, W&B will start logging from the last step.

Set the resume parameter to "allow" (resume="allow") when you initialize a run with W&B. Provide the run ID of the run that stopped or crashed. The following code snippet shows how to accomplish this with the W&B Python SDK:

import wandb

run = wandb.init(
    entity="<entity>", project="<project>", id="<run ID>", resume="allow"
)

Enable runs to automatically resume

The following code snippet shows how to enable runs to automatically resume with the Python SDK or with environment variables.

The following code snippet shows how to specify a W&B run ID with the Python SDK.

Replace values enclosed within <> with your own:

run = wandb.init(
    entity="<entity>", project="<project>", id="<run ID>", resume="<resume>"
)

The following example shows how to specify the W&B WANDB_RUN_ID variable in a bash script:

RUN_ID="$1"

WANDB_RESUME=allow WANDB_RUN_ID="$RUN_ID" python eval.py

Within your terminal, you could run the shell script along with the W&B run ID. The following code snippet passes the run ID akj172:

sh run_experiment.sh akj172 

For example, suppose you execute a Python script called train.py in a directory called Users/AwesomeEmployee/Desktop/ImageClassify/training/. Within train.py, the script creates a run that enables automatic resuming. Suppose next that the training script is stopped. To resume this run, you would need to restart your train.py script within Users/AwesomeEmployee/Desktop/ImageClassify/training/.

Resume preemptible Sweeps runs

Automatically requeue interrupted sweep runs. This is particularly useful if you run a sweep agent in a compute environment that is subject to preemption such as a SLURM job in a preemptible queue, an EC2 spot instance, or a Google Cloud preemptible VM.

Use the mark_preempting function to enable W&B to automatically requeue interrupted sweep runs. For example, the following code snippet marks a run as preemptible:

run = wandb.init()  # Initialize a run
run.mark_preempting()

The following outlines how W&B handles runs based on the exit status of a sweep run:

  • Status code 0: The run is considered to have terminated successfully and it will not be requeued.
  • Nonzero status: W&B automatically appends the run to a run queue associated with the sweep.
  • No status: The run is added to the sweep run queue. Sweep agents consume runs off the run queue until the queue is empty. Once the queue is empty, the sweep queue resumes generating new runs based on the sweep search algorithm.

1.5.7 - Rewind a run

Rewind a run

Rewind a run to correct or modify the history of a run without losing the original data. In addition, when you rewind a run, you can log new data from that point in time. W&B recomputes the summary metrics for the run you rewind based on the newly logged history. This means the following behavior:

  • History truncation: W&B truncates the history to the rewind point, allowing new data logging.
  • Summary metrics: Recomputed based on the newly logged history.
  • Configuration preservation: W&B preserves the original configurations and you can merge new configurations.

When you rewind a run, W&B resets the state of the run to the specified step, preserving the original data and maintaining a consistent run ID. This means that:

  • Run archiving: W&B archives the original runs. Runs are accessible from the Run Overview tab.
  • Artifact association: W&B associates artifacts with the run that produced them.
  • Immutable run IDs: Introduced for consistent forking from a precise state.
  • Copy immutable run ID: A button to copy the immutable run ID for improved run management.

Rewind a run

Use resume_from with wandb.init() to “rewind” a run’s history to a specific step. Specify the name of the run and the step you want to rewind from:

import wandb
import math

# Initialize the first run and log some metrics
# Replace with your_project_name and your_entity_name!
run1 = wandb.init(project="your_project_name", entity="your_entity_name")
for i in range(300):
    run1.log({"metric": i})
run1.finish()

# Rewind from the first run at a specific step and log the metric starting from step 200
run2 = wandb.init(project="your_project_name", entity="your_entity_name", resume_from=f"{run1.id}?_step=200")

# Continue logging in the new run
# For the first few steps, log the metric as is from run1
# After step 250, start logging the spikey pattern
for i in range(200, 300):
    if i < 250:
        run2.log({"metric": i, "step": i})  # Continue logging from run1 without spikes
    else:
        # Introduce the spikey behavior starting from step 250
        subtle_spike = i + (2 * math.sin(i / 3.0))  # Apply a subtle spikey pattern
        run2.log({"metric": subtle_spike, "step": i})
    # Additionally log the new metric at all steps
    run2.log({"additional_metric": i * 1.1, "step": i})
run2.finish()

View an archived run

After you rewind a run, you can explore the archived run with the W&B App UI. Follow these steps to view archived runs:

  1. Access the Overview Tab: Navigate to the Overview tab on the run’s page. This tab provides a comprehensive view of the run’s details and history.
  2. Locate the Forked From field: Within the Overview tab, find the Forked From field. This field captures the history of the resumptions. The Forked From field includes a link to the source run, allowing you to trace back to the original run and understand the entire rewind history.

By using the Forked From field, you can effortlessly navigate the tree of archived resumptions and gain insights into the sequence and origin of each rewind.

Fork from a run that you rewind

To fork from a rewound run, use the fork_from argument in wandb.init() and specify the source run ID and the step from the source run to fork from:

import wandb

# Fork the run from a specific step
# rewind_run refers to the run you previously rewound (for example, run2 above)
forked_run = wandb.init(
    project="your_project_name",
    entity="your_entity_name",
    fork_from=f"{rewind_run.id}?_step=500",
)

# Continue logging in the new run
for i in range(500, 1000):
    forked_run.log({"metric": i*3})
forked_run.finish()

1.5.8 - Send an alert

Send alerts, triggered from your Python code, to your Slack or email

Create alerts with Slack or email if your run crashes or with a custom trigger. For example, you can create an alert if the gradient of your training loop starts to blow up (reports NaN) or a step in your ML pipeline completes. Alerts apply to all projects where you initialize runs, including both personal and team projects.

And then see W&B Alerts messages in Slack (or your email):

How to create an alert

There are three steps to set up an alert:

  1. Turn on Alerts in your W&B User Settings
  2. Add run.alert() to your code
  3. Confirm alert is set up properly

1. Turn on alerts in your W&B User Settings

In your User Settings:

  • Scroll to the Alerts section
  • Turn on Scriptable run alerts to receive alerts from run.alert()
  • Use Connect Slack to pick a Slack channel to post alerts. We recommend the Slackbot channel because it keeps the alerts private.
  • Email will go to the email address you used when you signed up for W&B. We recommend setting up a filter in your email so all these alerts go into a folder and don’t fill up your inbox.

You will only have to do this the first time you set up W&B Alerts, or when you’d like to modify how you receive alerts.

Alerts settings in W&B User Settings

2. Add run.alert() to your code

Add run.alert() to your code (either in a Notebook or Python script) wherever you’d like it to be triggered:

import wandb

run = wandb.init()
run.alert(title="High Loss", text="Loss is increasing rapidly")

3. Check your Slack or email

Check your Slack or email for the alert message. If you didn’t receive any, make sure you’ve got email or Slack turned on for Scriptable Alerts in your User Settings.

Example

This simple alert sends a warning when accuracy falls below a threshold. In this example, it only sends alerts at least 5 minutes apart.

import wandb
from wandb import AlertLevel

run = wandb.init()

if acc < threshold:
    run.alert(
        title="Low accuracy",
        text=f"Accuracy {acc} is below the acceptable threshold {threshold}",
        level=AlertLevel.WARN,
        wait_duration=300,
    )

How to tag or mention users

Use the at sign @ followed by the Slack user ID to tag yourself or your colleagues in either the title or the text of the alert. You can find a Slack user ID from their Slack profile page.

run.alert(title="Loss is NaN", text=f"Hey <@U1234ABCD> loss has gone to NaN")

Team alerts

Team admins can set up alerts for the team on the team settings page: wandb.ai/teams/your-team.

Team alerts apply to everyone on your team. W&B recommends using the Slackbot channel because it keeps alerts private.

Change Slack channel to send alerts to

To change what channel alerts are sent to, click Disconnect Slack and then reconnect. After you reconnect, pick a different Slack channel.

1.6 - Log objects and media

Keep track of metrics, videos, custom plots, and more

Log a dictionary of metrics, media, or custom objects to a step with the W&B Python SDK. W&B collects the key-value pairs during each step and stores them in one unified dictionary each time you log data with wandb.log(). Data logged from your script is saved locally to your machine in a directory called wandb, then synced to the W&B cloud or your private server.

Each call to wandb.log is a new step by default. W&B uses steps as the default x-axis when it creates charts and panels. You can optionally create and use a custom x-axis or capture a custom summary metric. For more information, see Customize log axes.

Automatically logged data

W&B automatically logs the following information during a W&B Experiment:

  • System metrics: CPU and GPU utilization, network, etc. These are shown in the System tab on the run page. For the GPU, these are fetched with nvidia-smi.
  • Command line: The stdout and stderr are picked up and show in the logs tab on the run page.

Turn on Code Saving in your account’s Settings page to log:

  • Git commit: Pick up the latest git commit and see it on the overview tab of the run page, as well as a diff.patch file if there are any uncommitted changes.
  • Dependencies: The requirements.txt file will be uploaded and shown on the files tab of the run page, along with any files you save to the wandb directory for the run.

What data is logged with specific W&B API calls?

With W&B, you can decide exactly what you want to log. The following lists some commonly logged objects:

  • Datasets: You have to specifically log images or other dataset samples for them to stream to W&B.
  • Plots: Use wandb.plot with wandb.log to track charts. See Log Plots for more information.
  • Tables: Use wandb.Table to log data to visualize and query with W&B. See Log Tables for more information.
  • PyTorch gradients: Add wandb.watch(model) to see gradients of the weights as histograms in the UI (see the sketch after this list).
  • Configuration information: Log hyperparameters, a link to your dataset, or the name of the architecture you’re using as config parameters, passed in like this: wandb.init(config=your_config_dictionary). See the PyTorch Integrations page for more information.
  • Metrics: Use wandb.log to see metrics from your model. If you log metrics like accuracy and loss from inside your training loop, you’ll get live updating graphs in the UI.
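
The following sketch ties together the configuration, gradient, and metric bullets above for a tiny PyTorch model; the model, data, and key names are purely illustrative:

import torch
import torch.nn as nn

import wandb

# Hyperparameters and architecture details go into the run config
run = wandb.init(config={"learning_rate": 0.01, "architecture": "linear"})

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=run.config["learning_rate"])
loss_fn = nn.MSELoss()

# Log gradients of the model weights as histograms in the UI
wandb.watch(model)

for step in range(100):
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Metrics logged inside the training loop produce live-updating charts
    run.log({"loss": loss.item()})

run.finish()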

Common workflows

  1. Compare the best accuracy: To compare the best value of a metric across runs, set the summary value for that metric. By default, the summary is the last value you logged for each key. Setting the summary to the best value is useful in the table in the UI, where you can sort and filter runs by their summary metrics, so you can compare runs in a table or bar chart by their best accuracy instead of their final accuracy. For example: wandb.run.summary["best_accuracy"] = best_accuracy (see the sketch after this list).
  2. Multiple metrics on one chart: Log multiple metrics in the same call to wandb.log, like this: wandb.log({"acc": 0.9, "loss": 0.1}), and both will be available to plot against in the UI.
  3. Custom x-axis: Add a custom x-axis to the same log call to visualize your metrics against a different axis in the W&B dashboard. For example: wandb.log({'acc': 0.9, 'epoch': 3, 'batch': 117}). To set the default x-axis for a given metric, use Run.define_metric().
  4. Log rich media and charts: wandb.log supports the logging of a wide variety of data types, from media like images and videos to tables and charts.
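
As a minimal sketch of the best-accuracy workflow from item 1, using synthetic accuracy values and an illustrative project name:

import random

import wandb

run = wandb.init(project="summary-example")

best_accuracy = 0.0
for epoch in range(10):
    accuracy = random.random()  # stand-in for a real validation accuracy
    run.log({"accuracy": accuracy, "epoch": epoch})
    if accuracy > best_accuracy:
        best_accuracy = accuracy
        # Override the default summary (last logged value) with the best value
        run.summary["best_accuracy"] = best_accuracy

run.finish()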

1.6.1 - Create and track plots from experiments

Create and track plots from machine learning experiments.

Using the methods in wandb.plot, you can track charts with wandb.log, including charts that change over time during training. To learn more about our custom charting framework, check out this guide.

Basic charts

These simple charts make it easy to construct basic visualizations of metrics and results.

wandb.plot.line()

Log a custom line plot—a list of connected and ordered points on arbitrary axes.

data = [[x, y] for (x, y) in zip(x_values, y_values)]
table = wandb.Table(data=data, columns=["x", "y"])
wandb.log(
    {
        "my_custom_plot_id": wandb.plot.line(
            table, "x", "y", title="Custom Y vs X Line Plot"
        )
    }
)

You can use this to log curves on any two dimensions. If you’re plotting two lists of values against each other, the number of values in the lists must match exactly. For example, each point must have an x and a y.

See in the app

Run the code

wandb.plot.scatter()

Log a custom scatter plot—a list of points (x, y) on a pair of arbitrary axes x and y.

data = [[x, y] for (x, y) in zip(class_x_scores, class_y_scores)]
table = wandb.Table(data=data, columns=["class_x", "class_y"])
wandb.log({"my_custom_id": wandb.plot.scatter(table, "class_x", "class_y")})

You can use this to log scatter points on any two dimensions. If you’re plotting two lists of values against each other, the number of values in the lists must match exactly. For example, each point must have an x and a y.

See in the app

Run the code

wandb.plot.bar()

Log a custom bar chart—a list of labeled values as bars—natively in a few lines:

data = [[label, val] for (label, val) in zip(labels, values)]
table = wandb.Table(data=data, columns=["label", "value"])
wandb.log(
    {
        "my_bar_chart_id": wandb.plot.bar(
            table, "label", "value", title="Custom Bar Chart"
        )
    }
)

You can use this to log arbitrary bar charts. The number of labels and values in the lists must match exactly. Each data point must have both.

See in the app

Run the code

wandb.plot.histogram()

Log a custom histogram—sort a list of values into bins by count/frequency of occurrence—natively in a few lines. Let’s say I have a list of prediction confidence scores (scores) and want to visualize their distribution:

data = [[s] for s in scores]
table = wandb.Table(data=data, columns=["scores"])
wandb.log({"my_histogram": wandb.plot.histogram(table, "scores", title="Histogram")})

You can use this to log arbitrary histograms. Note that data is a list of lists, intended to support a 2D array of rows and columns.

See in the app

Run the code

wandb.plot.line_series()

Plot multiple lines, or multiple different lists of x-y coordinate pairs, on one shared set of x-y axes:

wandb.log(
    {
        "my_custom_id": wandb.plot.line_series(
            xs=[0, 1, 2, 3, 4],
            ys=[[10, 20, 30, 40, 50], [0.5, 11, 72, 3, 41]],
            keys=["metric Y", "metric Z"],
            title="Two Random Metrics",
            xname="x units",
        )
    }
)

Note that the number of x and y points must match exactly. You can supply one list of x values to match multiple lists of y values, or a separate list of x values for each list of y values.

See in the app

Model evaluation charts

These preset charts have built-in wandb.plot methods that make it quick and easy to log charts directly from your script and see the exact information you’re looking for in the UI.

wandb.plot.pr_curve()

Create a Precision-Recall curve in one line:

wandb.log({"pr": wandb.plot.pr_curve(ground_truth, predictions)})

You can log this whenever your code has access to:

  • a model’s predicted scores (predictions) on a set of examples
  • the corresponding ground truth labels (ground_truth) for those examples
  • (optionally) a list of the labels/class names (labels=["cat", "dog", "bird"...] if label index 0 means cat, 1 = dog, 2 = bird, etc.)
  • (optionally) a subset (still in list format) of the labels to visualize in the plot

See in the app

Run the code

wandb.plot.roc_curve()

Create an ROC curve in one line:

wandb.log({"roc": wandb.plot.roc_curve(ground_truth, predictions)})

You can log this whenever your code has access to:

  • a model’s predicted scores (predictions) on a set of examples
  • the corresponding ground truth labels (ground_truth) for those examples
  • (optionally) a list of the labels/ class names (labels=["cat", "dog", "bird"...] if label index 0 means cat, 1 = dog, 2 = bird, etc.)
  • (optionally) a subset (still in list format) of these labels to visualize on the plot

See in the app

Run the code

wandb.plot.confusion_matrix()

Create a multi-class confusion matrix in one line:

cm = wandb.plot.confusion_matrix(
    y_true=ground_truth, preds=predictions, class_names=class_names
)

wandb.log({"conf_mat": cm})

You can log this wherever your code has access to:

  • a model’s predicted labels on a set of examples (preds) or the normalized probability scores (probs). The probabilities must have the shape (number of examples, number of classes). You can supply either probabilities or predictions but not both.
  • the corresponding ground truth labels for those examples (y_true)
  • a full list of the labels/class names as strings of class_names. Examples: class_names=["cat", "dog", "bird"] if index 0 is cat, 1 is dog, 2 is bird.

See in the app

Run the code

Interactive custom charts

For full customization, tweak a built-in Custom Chart preset or create a new preset, then save the chart. Use the chart ID to log data to that custom preset directly from your script.

# Create a table with the columns to plot
table = wandb.Table(data=data, columns=["step", "height"])

# Map from the table's columns to the chart's fields
fields = {"x": "step", "value": "height"}

# Use the table to populate the new custom chart preset
# To use your own saved chart preset, change the vega_spec_name
# To edit the title, change the string_fields
my_custom_chart = wandb.plot_table(
    vega_spec_name="carey/new_chart",
    data_table=table,
    fields=fields,
    string_fields={"title": "Height Histogram"},
)

Run the code

Matplotlib and Plotly plots

Instead of using W&B Custom Charts with wandb.plot, you can log charts generated with matplotlib and Plotly.

import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4])
plt.ylabel("some interesting numbers")
wandb.log({"chart": plt})

Just pass a matplotlib plot or figure object to wandb.log(). By default we’ll convert the plot into a Plotly plot. If you’d rather log the plot as an image, you can pass the plot into wandb.Image. We also accept Plotly charts directly.
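
For instance, here is a sketch of both options; the project name is illustrative:

import matplotlib.pyplot as plt
import plotly.express as px

import wandb

run = wandb.init(project="plot-logging-example")

# Log a matplotlib figure as a static image instead of an interactive chart
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4])
run.log({"chart_as_image": wandb.Image(fig)})

# Log a Plotly figure object directly
plotly_fig = px.line(x=[0, 1, 2, 3], y=[0, 1, 4, 9])
run.log({"plotly_chart": plotly_fig})

run.finish()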

Log custom HTML to W&B Tables

W&B supports logging interactive charts from Plotly and Bokeh as HTML and adding them to Tables.

Log Plotly figures to Tables as HTML

You can log interactive Plotly charts to wandb Tables by converting them to HTML.

import wandb
import plotly.express as px

# Initialize a new run
run = wandb.init(project="log-plotly-fig-tables", name="plotly_html")

# Create a table
table = wandb.Table(columns=["plotly_figure"])

# Create path for Plotly figure
path_to_plotly_html = "./plotly_figure.html"

# Example Plotly figure
fig = px.scatter(x=[0, 1, 2, 3, 4], y=[0, 1, 4, 9, 16])

# Write Plotly figure to HTML
# Set auto_play to False prevents animated Plotly charts
# from playing in the table automatically
fig.write_html(path_to_plotly_html, auto_play=False)

# Add Plotly figure as HTML file into Table
table.add_data(wandb.Html(path_to_plotly_html))

# Log Table
run.log({"test_table": table})
wandb.finish()

Log Bokeh figures to Tables as HTML

You can log interactive Bokeh charts to wandb Tables by converting them to HTML.

from scipy.signal import spectrogram
import holoviews as hv
import panel as pn
from scipy.io import wavfile
import numpy as np
from bokeh.resources import INLINE

hv.extension("bokeh", logo=False)
import wandb


def save_audio_with_bokeh_plot_to_html(audio_path, html_file_name):
    sr, wav_data = wavfile.read(audio_path)
    duration = len(wav_data) / sr
    f, t, sxx = spectrogram(wav_data, sr)
    spec_gram = hv.Image((t, f, np.log10(sxx)), ["Time (s)", "Frequency (hz)"]).opts(
        width=500, height=150, labelled=[]
    )
    audio = pn.pane.Audio(wav_data, sample_rate=sr, name="Audio", throttle=500)
    slider = pn.widgets.FloatSlider(end=duration, visible=False)
    line = hv.VLine(0).opts(color="white")
    slider.jslink(audio, value="time", bidirectional=True)
    slider.jslink(line, value="glyph.location")
    combined = pn.Row(audio, spec_gram * line, slider).save(html_file_name)


html_file_name = "audio_with_plot.html"
audio_path = "hello.wav"
save_audio_with_bokeh_plot_to_html(audio_path, html_file_name)

wandb_html = wandb.Html(html_file_name)
run = wandb.init(project="audio_test")
my_table = wandb.Table(columns=["audio_with_plot"], data=[[wandb_html], [wandb_html]])
run.log({"audio_table": my_table})
run.finish()

1.6.2 - Customize log axes

Use define_metric to set a custom x-axis. Custom x-axes are useful in contexts where you need to log to different time steps in the past during training, asynchronously. For example, this can be useful in RL where you may track the per-episode reward and a per-step reward.

Try define_metric live in Google Colab →

Customize axes

By default, all metrics are logged against the same x-axis, which is the W&B internal step. Sometimes, you might want to log to a previous step, or use a different x-axis.

Here’s an example of setting a custom x-axis metric, instead of the default step.

import wandb

wandb.init()
# define our custom x axis metric
wandb.define_metric("custom_step")
# define which metrics will be plotted against it
wandb.define_metric("validation_loss", step_metric="custom_step")

for i in range(10):
    log_dict = {
        "train_loss": 1 / (i + 1),
        "custom_step": i**2,
        "validation_loss": 1 / (i + 1),
    }
    wandb.log(log_dict)

The x-axis can be set using globs as well. Currently, only globs that have string prefixes are available. The following example will plot all logged metrics with the prefix "train/" to the x-axis "train/step":

import wandb

wandb.init()
# define our custom x axis metric
wandb.define_metric("train/step")
# set all other train/ metrics to use this step
wandb.define_metric("train/*", step_metric="train/step")

for i in range(10):
    log_dict = {
        "train/step": 2**i,  # exponential growth w/ internal W&B step
        "train/loss": 1 / (i + 1),  # x-axis is train/step
        "train/accuracy": 1 - (1 / (1 + i)),  # x-axis is train/step
        "val/loss": 1 / (1 + i),  # x-axis is internal wandb step
    }
    wandb.log(log_dict)

1.6.3 - Log distributed training experiments

Use W&B to log distributed training experiments with multiple GPUs.

In distributed training, models are trained using multiple GPUs in parallel. W&B supports two patterns to track distributed training experiments:

  1. One process: Initialize W&B (wandb.init) and log experiments (wandb.log) from a single process. This is a common solution for logging distributed training experiments with the PyTorch Distributed Data Parallel (DDP) Class. In some cases, users funnel data over from other processes using a multiprocessing queue (or another communication primitive) to the main logging process.
  2. Many processes: Initialize W&B (wandb.init) and log experiments (wandb.log) in every process. Each process is effectively a separate experiment. Use the group parameter when you initialize W&B (wandb.init(group='group-name')) to define a shared experiment and group the logged values together in the W&B App UI.

The following examples demonstrate how to track metrics with W&B using PyTorch DDP on two GPUs on a single machine. PyTorch DDP (DistributedDataParallel in torch.nn) is a popular library for distributed training. The basic principles apply to any distributed training setup, but the details of implementation may differ.

Method 1: One process

In this method we track only a rank 0 process. To implement this method, initialize W&B (wandb.init), commence a W&B Run, and log metrics (wandb.log) within the rank 0 process. This method is simple and robust; however, it does not log model metrics from other processes (for example, loss values or inputs from their batches). System metrics, such as usage and memory, are still logged for all GPUs since that information is available to all processes.

Within our sample Python script (log-ddp.py), we check to see if the rank is 0. To do so, we first launch multiple processes with torch.distributed.launch. Next, we check the rank with the --local_rank command line argument. If the rank is set to 0, we set up wandb logging conditionally in the train() function. Within our Python script, we use the following check:

if __name__ == "__main__":
    # Get args
    args = parse_args()

    if args.local_rank == 0:  # only on main process
        # Initialize wandb run
        run = wandb.init(
            entity=args.entity,
            project=args.project,
        )
        # Train model with DDP
        train(args, run)
    else:
        train(args)

Explore the W&B App UI to view an example dashboard of metrics tracked from a single process. The dashboard displays system metrics such as temperature and utilization, that were tracked for both GPUs.

However, the loss values as a function of epoch and batch size were only logged from a single GPU.

Method 2: Many processes

In this method, we track each process in the job, calling wandb.init() and wandb.log() from each process separately. We suggest you call wandb.finish() at the end of training, to mark that the run has completed so that all processes exit properly.

This method makes more information accessible for logging. However, note that multiple W&B Runs are reported in the W&B App UI. It might be difficult to keep track of W&B Runs across multiple experiments. To mitigate this, provide a value to the group parameter when you initialize W&B to keep track of which W&B Run belongs to a given experiment. For more information about how to keep track of training and evaluation W&B Runs in experiments, see Group Runs.

The following Python code snippet demonstrates how to set the group parameter when you initialize W&B:

if __name__ == "__main__":
    # Get args
    args = parse_args()
    # Initialize run
    run = wandb.init(
        entity=args.entity,
        project=args.project,
        group="DDP",  # all runs for the experiment in one group
    )
    # Train model with DDP
    train(args, run)

Explore the W&B App UI to view an example dashboard of metrics tracked from multiple processes. Note that there are two W&B Runs grouped together in the left sidebar. Click on a group to view the dedicated group page for the experiment. The dedicated group page displays metrics from each process separately.

The preceding image demonstrates the W&B App UI dashboard. In the sidebar there are two experiments: one labeled ‘null’ and a second (bound by a yellow box) called ‘DDP’. If you expand the group (select the Group dropdown), you will see the W&B Runs associated with that experiment.

Use W&B Service to avoid common distributed training issues

There are two common issues you might encounter when using W&B and distributed training:

  1. Hanging at the beginning of training - A wandb process can hang if the wandb multiprocessing interferes with the multiprocessing from distributed training.
  2. Hanging at the end of training - A training job might hang if the wandb process does not know when it needs to exit. Call the wandb.finish() API at the end of your Python script to tell W&B that the Run finished. The wandb.finish() API will finish uploading data and will cause W&B to exit.

We recommend using the wandb service to improve the reliability of your distributed jobs. Both of the preceding training issues are commonly found in versions of the W&B SDK where wandb service is unavailable.

Enable W&B Service

Depending on your version of the W&B SDK, you might already have W&B Service enabled by default.

W&B SDK 0.13.0 and above

W&B Service is enabled by default for versions of the W&B SDK 0.13.0 and above.

W&B SDK 0.12.5 and above

Modify your Python script to enable W&B Service for W&B SDK version 0.12.5 and above. Use the wandb.require method and pass the string "service" within your main function:

import wandb


def main():
    wandb.require("service")
    # rest-of-your-script-goes-here


if __name__ == "__main__":
    main()

For the best experience, we recommend you upgrade to the latest version.

W&B SDK 0.12.4 and below

If you use W&B SDK version 0.12.4 or below, set the WANDB_START_METHOD environment variable to "thread" to use multithreading instead.
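
For example, a minimal sketch of setting the variable from Python before initializing the run (you could equivalently export WANDB_START_METHOD in your shell before launching the script):

import os

import wandb

# Must be set before wandb.init() is called
os.environ["WANDB_START_METHOD"] = "thread"

run = wandb.init()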

Example use cases for multiprocessing

The following code snippets demonstrate common methods for advanced distributed use cases.

Spawn process

Use the wandb.setup() method in your main function if you initiate a W&B Run in a spawned process:

import multiprocessing as mp

import wandb


def do_work(n):
    run = wandb.init(config=dict(n=n))
    run.log(dict(this=n * n))


def main():
    wandb.setup()
    pool = mp.Pool(processes=4)
    pool.map(do_work, range(4))


if __name__ == "__main__":
    main()

Share a W&B Run

Pass a W&B Run object as an argument to share W&B Runs between processes:

import multiprocessing as mp

import wandb


def do_work(run):
    run.log(dict(this=1))


def main():
    run = wandb.init()
    p = mp.Process(target=do_work, kwargs=dict(run=run))
    p.start()
    p.join()


if __name__ == "__main__":
    main()

1.6.4 - Log media and objects

Log rich media, from 3D point clouds and molecules to HTML and histograms

We support images, video, audio, and more. Log rich media to explore your results and visually compare your runs, models, and datasets. Read on for examples and how-to guides.

Pre-requisites

In order to log media objects with the W&B SDK, you may need to install additional dependencies. You can install these dependencies by running the following command:

pip install wandb[media]

Images

Log images to track inputs, outputs, filter weights, activations, and more.

Inputs and outputs of an autoencoder network performing in-painting.

Images can be logged directly from NumPy arrays, as PIL images, or from the filesystem.

Each time you log images from a step, we save them to show in the UI. Expand the image panel, and use the step slider to look at images from different steps. This makes it easy to compare how a model’s output changes during training.

Provide arrays directly when constructing images manually, such as by using make_grid from torchvision.

Arrays are converted to png using Pillow.

images = wandb.Image(image_array, caption="Top: Output, Bottom: Input")

wandb.log({"examples": images})

We assume the image is grayscale if the last dimension is 1, RGB if it’s 3, and RGBA if it’s 4. If the array contains floats, we convert them to integers between 0 and 255. If you want to normalize your images differently, you can specify the mode manually or just supply a PIL.Image, as described in the “Logging PIL Images” tab of this panel.
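
For instance, a sketch that passes the mode argument explicitly; "L" is the single-channel grayscale mode, and the data and project name are synthetic:

import numpy as np

import wandb

wandb.init(project="image-mode-example")

# A float array in [0, 1]; mode="L" forces single-channel grayscale conversion
gray = np.random.rand(28, 28)
wandb.log({"grayscale_example": wandb.Image(gray, mode="L")})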

For full control over the conversion of arrays to images, construct the PIL.Image yourself and provide it directly.

images = [PIL.Image.fromarray(image) for image in image_array]

wandb.log({"examples": [wandb.Image(image) for image in images]})

For even more control, create images however you like, save them to disk, and provide a filepath.

im = PIL.Image.fromarray(...)
rgb_im = im.convert("RGB")
rgb_im.save("myimage.jpg")

wandb.log({"example": wandb.Image("myimage.jpg")})

Image overlays

Log semantic segmentation masks and interact with them (altering opacity, viewing changes over time, and more) via the W&B UI.

Interactive mask viewing in the W&B UI.

To log an overlay, you’ll need to provide a dictionary with the following keys and values to the masks keyword argument of wandb.Image:

  • one of two keys representing the image mask:
    • "mask_data": a 2D NumPy array containing an integer class label for each pixel
    • "path": (string) a path to a saved image mask file
  • "class_labels": (optional) a dictionary mapping the integer class labels in the image mask to their readable class names

To log multiple masks, log a mask dictionary with multiple keys, as in the code snippet below.

See a live example

Sample code

mask_data = np.array([[1, 2, 2, ..., 2, 2, 1], ...])

class_labels = {1: "tree", 2: "car", 3: "road"}

mask_img = wandb.Image(
    image,
    masks={
        "predictions": {"mask_data": mask_data, "class_labels": class_labels},
        "ground_truth": {
            # ...
        },
        # ...
    },
)

Log bounding boxes with images, and use filters and toggles to dynamically visualize different sets of boxes in the UI.

See a live example

To log a bounding box, you’ll need to provide a dictionary with the following keys and values to the boxes keyword argument of wandb.Image:

  • box_data: a list of dictionaries, one for each box. The box dictionary format is described below.
    • position: a dictionary representing the position and size of the box in one of two formats, as described below. Boxes need not all use the same format.
      • Option 1: {"minX", "maxX", "minY", "maxY"}. Provide a set of coordinates defining the upper and lower bounds of each box dimension.
      • Option 2: {"middle", "width", "height"}. Provide a set of coordinates specifying the middle coordinates as [x,y], and width and height as scalars.
    • class_id: an integer representing the class identity of the box. See class_labels key below.
    • scores: a dictionary of string labels and numeric values for scores. Can be used for filtering boxes in the UI.
    • domain: specify the units/format of the box coordinates. Set this to “pixel” if the box coordinates are expressed in pixel space, such as integers within the bounds of the image dimensions. By default, the domain is assumed to be a fraction/percentage of the image, expressed as a floating point number between 0 and 1.
    • box_caption: (optional) a string to be displayed as the label text on this box
  • class_labels: (optional) A dictionary mapping class_ids to strings. By default we will generate class labels class_0, class_1, etc.

Check out this example:

class_id_to_label = {
    1: "car",
    2: "road",
    3: "building",
    # ...
}

img = wandb.Image(
    image,
    boxes={
        "predictions": {
            "box_data": [
                {
                    # one box expressed in the default relative/fractional domain
                    "position": {"minX": 0.1, "maxX": 0.2, "minY": 0.3, "maxY": 0.4},
                    "class_id": 2,
                    "box_caption": class_id_to_label[2],
                    "scores": {"acc": 0.1, "loss": 1.2},
                    # another box expressed in the pixel domain
                    # (for illustration purposes only, all boxes are likely
                    # to be in the same domain/format)
                    "position": {"middle": [150, 20], "width": 68, "height": 112},
                    "domain": "pixel",
                    "class_id": 3,
                    "box_caption": "a building",
                    "scores": {"acc": 0.5, "loss": 0.7},
                    # ...
                    # Log as many boxes an as needed
                }
            ],
            "class_labels": class_id_to_label,
        },
        # Log each meaningful group of boxes with a unique key name
        "ground_truth": {
            # ...
        },
    },
)

wandb.log({"driving_scene": img})

Image overlays in Tables

Interactive Segmentation Masks in Tables

To log Segmentation Masks in tables, you will need to provide a wandb.Image object for each row in the table.

An example is provided in the Code snippet below:

table = wandb.Table(columns=["ID", "Image"])

for id, img, label in zip(ids, images, labels):
    mask_img = wandb.Image(
        img,
        masks={
            "prediction": {"mask_data": label, "class_labels": class_labels}
            # ...
        },
    )

    table.add_data(id, mask_img)

wandb.log({"Table": table})
Interactive Bounding Boxes in Tables

To log Images with Bounding Boxes in tables, you will need to provide a wandb.Image object for each row in the table.

An example is provided in the code snippet below:

table = wandb.Table(columns=["ID", "Image"])

for id, img, boxes in zip(ids, images, boxes_set):
    box_img = wandb.Image(
        img,
        boxes={
            "prediction": {
                "box_data": [
                    {
                        "position": {
                            "minX": box["minX"],
                            "minY": box["minY"],
                            "maxX": box["maxX"],
                            "maxY": box["maxY"],
                        },
                        "class_id": box["class_id"],
                        "box_caption": box["caption"],
                        "domain": "pixel",
                    }
                    for box in boxes
                ],
                "class_labels": class_labels,
            }
        },
    )

    table.add_data(id, box_img)

wandb.log({"Table": table})

Histograms

If a sequence of numbers, such as a list, array, or tensor, is provided as the first argument, we will construct the histogram automatically by calling np.histogram. All arrays/tensors are flattened. You can use the optional num_bins keyword argument to override the default of 64 bins. The maximum number of bins supported is 512.
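
For example, a sketch that overrides the default bin count; the data and project name are synthetic:

import numpy as np

import wandb

wandb.init(project="histogram-example")

values = np.random.randn(10000)
# Use 128 bins instead of the default 64 (the maximum supported is 512)
wandb.log({"value_distribution": wandb.Histogram(values, num_bins=128)})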

In the UI, histograms are plotted with the training step on the x-axis, the metric value on the y-axis, and the count represented by color, to ease comparison of histograms logged throughout training. See the “Histograms in Summary” tab of this panel for details on logging one-off histograms.

wandb.log({"gradients": wandb.Histogram(grads)})
Gradients for the discriminator in a GAN.

If you want more control, call np.histogram and pass the returned tuple to the np_histogram keyword argument.

np_hist_grads = np.histogram(grads, density=True, range=(0.0, 1.0))
wandb.log({"gradients": wandb.Histogram(np_hist_grads)})
wandb.run.summary.update(  # if only in summary, only visible on overview tab
    {"final_logits": wandb.Histogram(logits)}
)

Log files in the formats 'obj', 'gltf', 'glb', 'babylon', 'stl', 'pts.json', and we will render them in the UI when your run finishes.

wandb.log(
    {
        "generated_samples": [
            wandb.Object3D(open("sample.obj")),
            wandb.Object3D(open("sample.gltf")),
            wandb.Object3D(open("sample.glb")),
        ]
    }
)
Ground truth and prediction of a headphones point cloud

See a live example

If histograms are in your summary they will appear on the Overview tab of the Run Page. If they are in your history, we plot a heatmap of bins over time on the Charts tab.

3D visualizations

Log 3D point clouds and Lidar scenes with bounding boxes. Pass in a NumPy array containing coordinates and colors for the points to render.

point_cloud = np.array([[0, 0, 0, COLOR]])

wandb.log({"point_cloud": wandb.Object3D(point_cloud)})

The W&B UI truncates the data at 300,000 points.

NumPy array formats

Three different formats of NumPy arrays are supported for flexible color schemes.

  • [[x, y, z], ...] nx3
  • [[x, y, z, c], ...] nx4 | c is a category in the range [1, 14] (Useful for segmentation)
  • [[x, y, z, r, g, b], ...] nx6 | r, g, b are values in the range [0, 255] for the red, green, and blue color channels.

Python object

Using this schema, you can define a Python object and pass it in to the from_point_cloud method as shown below.

  • points is a NumPy array containing coordinates and colors for the points to render, using the same formats as the simple point cloud renderer shown above.
  • boxes is a NumPy array of Python dictionaries with the following attributes:
    • corners - a list of eight corners
    • label - a string representing the label to be rendered on the box (Optional)
    • color - RGB values representing the color of the box
    • score - a numeric value that will be displayed on the bounding box that can be used to filter the bounding boxes shown (for example, to only show bounding boxes where score > 0.75). (Optional)
  • type is a string representing the scene type to render. Currently the only supported value is lidar/beta.

point_list = [
    [
        2566.571924017235, # x
        746.7817289698219, # y
        -15.269245470863748,# z
        76.5, # red
        127.5, # green
        89.46617199365393 # blue
    ],
    [ 2566.592983606823, 746.6791987335685, -15.275803826279521, 76.5, 127.5, 89.45471117247024 ],
    [ 2566.616361739416, 746.4903185513501, -15.28628929674075, 76.5, 127.5, 89.41336375503832 ],
    [ 2561.706014951675, 744.5349468458361, -14.877496818222781, 76.5, 127.5, 82.21868245418283 ],
    [ 2561.5281847916694, 744.2546118233013, -14.867862032341005, 76.5, 127.5, 81.87824684536432 ],
    [ 2561.3693562897465, 744.1804761656741, -14.854129178142523, 76.5, 127.5, 81.64137897587152 ],
    [ 2561.6093071504515, 744.0287526628543, -14.882135189841177, 76.5, 127.5, 81.89871499537098 ],
    # ... and so on
]

run.log({"my_first_point_cloud": wandb.Object3D.from_point_cloud(
     points = point_list,
     boxes = [{
         "corners": [
                [ 2601.2765123137915, 767.5669506323393, -17.816764802288663 ],
                [ 2599.7259021588347, 769.0082337923552, -17.816764802288663 ],
                [ 2599.7259021588347, 769.0082337923552, -19.66876480228866 ],
                [ 2601.2765123137915, 767.5669506323393, -19.66876480228866 ],
                [ 2604.8684867834395, 771.4313904894723, -17.816764802288663 ],
                [ 2603.3178766284827, 772.8726736494882, -17.816764802288663 ],
                [ 2603.3178766284827, 772.8726736494882, -19.66876480228866 ],
                [ 2604.8684867834395, 771.4313904894723, -19.66876480228866 ]
        ],
         "color": [0, 0, 255], # color in RGB of the bounding box
         "label": "car", # string displayed on the bounding box
         "score": 0.6 # numeric displayed on the bounding box
     }],
     vectors = [
        {"start": [0, 0, 0], "end": [0.1, 0.2, 0.5], "color": [255, 0, 0]}, # color is optional
     ],
     point_cloud_type = "lidar/beta",
)})

When viewing a point cloud, you can hold control and use the mouse to move around inside the space.

Point cloud files

You can use the from_file method to load in a JSON file full of point cloud data.

run.log({"my_cloud_from_file": wandb.Object3D.from_file(
     "./my_point_cloud.pts.json"
)})

An example of how to format the point cloud data is shown below.

{
    "boxes": [
        {
            "color": [
                0,
                255,
                0
            ],
            "score": 0.35,
            "label": "My label",
            "corners": [
                [
                    2589.695869075582,
                    760.7400443552185,
                    -18.044831294622487
                ],
                [
                    2590.719039645323,
                    762.3871153874499,
                    -18.044831294622487
                ],
                [
                    2590.719039645323,
                    762.3871153874499,
                    -19.54083129462249
                ],
                [
                    2589.695869075582,
                    760.7400443552185,
                    -19.54083129462249
                ],
                [
                    2594.9666662674313,
                    757.4657929961453,
                    -18.044831294622487
                ],
                [
                    2595.9898368371723,
                    759.1128640283766,
                    -18.044831294622487
                ],
                [
                    2595.9898368371723,
                    759.1128640283766,
                    -19.54083129462249
                ],
                [
                    2594.9666662674313,
                    757.4657929961453,
                    -19.54083129462249
                ]
            ]
        }
    ],
    "points": [
        [
            2566.571924017235,
            746.7817289698219,
            -15.269245470863748,
            76.5,
            127.5,
            89.46617199365393
        ],
        [
            2566.592983606823,
            746.6791987335685,
            -15.275803826279521,
            76.5,
            127.5,
            89.45471117247024
        ],
        [
            2566.616361739416,
            746.4903185513501,
            -15.28628929674075,
            76.5,
            127.5,
            89.41336375503832
        ]
    ],
    "type": "lidar/beta"
}

NumPy arrays

Using the same array formats defined above, you can use numpy arrays directly with the from_numpy method to define a point cloud.

run.log({"my_cloud_from_numpy_xyz": wandb.Object3D.from_numpy(
     np.array(  
        [
            [0.4, 1, 1.3], # x, y, z
            [1, 1, 1], 
            [1.2, 1, 1.2]
        ]
    )
)})
run.log({"my_cloud_from_numpy_cat": wandb.Object3D.from_numpy(
     np.array(  
        [
            [0.4, 1, 1.3, 1], # x, y, z, category 
            [1, 1, 1, 1], 
            [1.2, 1, 1.2, 12], 
            [1.2, 1, 1.3, 12], 
            [1.2, 1, 1.4, 12], 
            [1.2, 1, 1.5, 12], 
            [1.2, 1, 1.6, 11], 
            [1.2, 1, 1.7, 11], 
        ]
    )
)})
run.log({"my_cloud_from_numpy_rgb": wandb.Object3D.from_numpy(
     np.array(  
        [
            [0.4, 1, 1.3, 255, 0, 0], # x, y, z, r, g, b 
            [1, 1, 1, 0, 255, 0], 
            [1.2, 1, 1.3, 0, 255, 255],
            [1.2, 1, 1.4, 0, 255, 255],
            [1.2, 1, 1.5, 0, 0, 255],
            [1.2, 1, 1.1, 0, 0, 255],
            [1.2, 1, 0.9, 0, 0, 255],
        ]
    )
)})
wandb.log({"protein": wandb.Molecule("6lu7.pdb")})

Log molecular data in any of 10 file types: pdb, pqr, mmcif, mcif, cif, sdf, sd, gro, mol2, or mmtf.

W&B also supports logging molecular data from SMILES strings, rdkit mol files, and rdkit.Chem.rdchem.Mol objects.

resveratrol = rdkit.Chem.MolFromSmiles("Oc1ccc(cc1)C=Cc1cc(O)cc(c1)O")

wandb.log(
    {
        "resveratrol": wandb.Molecule.from_rdkit(resveratrol),
        "green fluorescent protein": wandb.Molecule.from_rdkit("2b3p.mol"),
        "acetaminophen": wandb.Molecule.from_smiles("CC(=O)Nc1ccc(O)cc1"),
    }
)

When your run finishes, you’ll be able to interact with 3D visualizations of your molecules in the UI.

See a live example using AlphaFold

PNG image

wandb.Image converts numpy arrays or instances of PIL.Image to PNGs by default.

wandb.log({"example": wandb.Image(...)})
# Or multiple images
wandb.log({"example": [wandb.Image(...) for img in images]})

Video

Videos are logged using the wandb.Video data type:

wandb.log({"example": wandb.Video("myvideo.mp4")})

Now you can view videos in the media browser. Go to your project workspace, run workspace, or report and click Add visualization to add a rich media panel.

2D view of a molecule

You can log a 2D view of a molecule using the wandb.Image data type and rdkit:

molecule = rdkit.Chem.MolFromSmiles("CC(=O)O")
rdkit.Chem.AllChem.Compute2DCoords(molecule)
rdkit.Chem.AllChem.GenerateDepictionMatching2DStructure(molecule, molecule)
pil_image = rdkit.Chem.Draw.MolToImage(molecule, size=(300, 300))

wandb.log({"acetic_acid": wandb.Image(pil_image)})

Other media

W&B also supports logging of a variety of other media types.

wandb.log({"whale songs": wandb.Audio(np_array, caption="OooOoo", sample_rate=32)})

The maximum number of audio clips that can be logged per step is 100.

wandb.log({"video": wandb.Video(numpy_array_or_path_to_video, fps=4, format="gif")})

If a numpy array is supplied we assume the dimensions are, in order: time, channels, width, height. By default we create a 4 fps gif image (ffmpeg and the moviepy python library are required when passing numpy objects). Supported formats are "gif", "mp4", "webm", and "ogg". If you pass a string to wandb.Video we assert the file exists and is a supported format before uploading to wandb. Passing a BytesIO object will create a temporary file with the specified format as the extension.
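
As a sketch of the BytesIO behavior described above, assuming an existing mp4 on disk; the file and project names are illustrative:

import io

import wandb

wandb.init(project="video-example")

# Read an existing video file into an in-memory buffer
with open("myvideo.mp4", "rb") as f:
    buffer = io.BytesIO(f.read())

# format determines the extension of the temporary file W&B writes
wandb.log({"video_from_buffer": wandb.Video(buffer, format="mp4")})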

On the W&B Run and Project Pages, you will see your videos in the Media section.

Use wandb.Table to log text in tables to show up in the UI. By default, the column headers are ["Input", "Output", "Expected"]. To ensure optimal UI performance, the default maximum number of rows is set to 10,000. However, users can explicitly override the maximum with wandb.Table.MAX_ROWS = {DESIRED_MAX}.

columns = ["Text", "Predicted Sentiment", "True Sentiment"]
# Method 1
data = [["I love my phone", "1", "1"], ["My phone sucks", "0", "-1"]]
table = wandb.Table(data=data, columns=columns)
wandb.log({"examples": table})

# Method 2
table = wandb.Table(columns=columns)
table.add_data("I love my phone", "1", "1")
table.add_data("My phone sucks", "0", "-1")
wandb.log({"examples": table})

You can also pass a pandas DataFrame object.

table = wandb.Table(dataframe=my_dataframe)

wandb.log({"custom_file": wandb.Html(open("some.html"))})
wandb.log({"custom_string": wandb.Html('<a href="https://mysite">Link</a>')})

Custom html can be logged at any key, and this exposes an HTML panel on the run page. By default we inject default styles, you can turn off default styles by passing inject=False.

wandb.log({"custom_file": wandb.Html(open("some.html"), inject=False)})

1.6.5 - Log models

Log models

The following guide describes how to log models to a W&B run and interact with them.

Log a model to a run

Use the log_model method to log a model artifact that contains content within a directory you specify. The log_model method also marks the resulting model artifact as an output of the W&B run.

You can track a model’s dependencies and the model’s associations if you mark the model as the input or output of a W&B run. View the lineage of the model within the W&B App UI. See the Explore and traverse artifact graphs page within the Artifacts chapter for more information.

Provide the path where your model files are saved to the path parameter. The path can be a local file, directory, or reference URI to an external bucket such as s3://bucket/path.

Replace values enclosed in <> with your own.

import wandb

# Initialize a W&B run
run = wandb.init(project="<your-project>", entity="<your-entity>")

# Log the model
run.log_model(path="<path-to-model>", name="<name>")

Optionally provide a name for the model artifact for the name parameter. If name is not specified, W&B will use the basename of the input path prepended with the run ID as the name.

See log_model in the API Reference guide for more information on possible parameters.
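
For example, a sketch of logging a model stored in an external bucket by passing a reference URI; the bucket path and model name are illustrative:

import wandb

run = wandb.init(project="<your-project>", entity="<your-entity>")

# The path can also be a reference URI to an external bucket
run.log_model(path="s3://my-bucket/models/resnet50/", name="resnet50-baseline")
run.finish()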

Example: Log a model to a run
import os
import wandb
from tensorflow import keras
from tensorflow.keras import layers

config = {"optimizer": "adam", "loss": "categorical_crossentropy"}

# Initialize a W&B run
run = wandb.init(entity="charlie", project="mnist-experiments", config=config)

# Hyperparameters
loss = run.config["loss"]
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
num_classes = 10
input_shape = (28, 28, 1)

# Training algorithm
model = keras.Sequential(
    [
        layers.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)

# Configure the model for training
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)

# Save model
model_filename = "model.h5"
local_filepath = "./"
full_path = os.path.join(local_filepath, model_filename)
model.save(filepath=full_path)

# Log the model to the W&B run
run.log_model(path=full_path, name="MNIST")
run.finish()

When the user called log_model, a model artifact named MNIST was created and the file model.h5 was added to the model artifact. Your terminal or notebook prints information about where to find the run that the model was logged to.

View run different-surf-5 at: https://wandb.ai/charlie/mnist-experiments/runs/wlby6fuw
Synced 5 W&B file(s), 0 media file(s), 1 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20231206_103511-wlby6fuw/logs

Download and use a logged model

Use the use_model function to access and download model files previously logged to a W&B run.

Provide the name of the model artifact where the model files you want to retrieve are stored. The name you provide must match the name of an existing logged model artifact.

If you did not define name when you originally logged the files with log_model, the default name assigned is the basename of the input path, prepended with the run ID.

Replace the values enclosed in <> with your own:

import wandb

# Initialize a run
run = wandb.init(project="<your-project>", entity="<your-entity>")

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name="<your-model-name>")

The use_model function returns the path of downloaded model files. Keep track of this path if you want to link this model later. In the preceding code snippet, the returned path is stored in a variable called downloaded_model_path.

Example: Download and use a logged model

For example, in the following code snippet a user calls the use_model API. They specify the name of the model artifact they want to fetch along with a version/alias, and then store the path returned from the API in the downloaded_model_path variable.

import wandb

entity = "luka"
project = "NLP_Experiments"
alias = "latest"  # semantic nickname or identifier for the model version
model_artifact_name = "fine-tuned-model"

# Initialize a run
run = wandb.init(project=project, entity=entity)
# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name=f"{model_artifact_name}:{alias}")

See use_model in the API Reference guide for more information on possible parameters and return type.

Log and link a model to the W&B Model Registry

Use the link_model method to log model files to a W&B run and link them to the W&B Model Registry. If no registered model exists, W&B will create a new one for you with the name you provide for the registered_model_name parameter.

A Registered Model is a collection or folder of linked model versions in the Model Registry. Registered models typically represent candidate models for a single modeling use case or task.

The following code snippet shows how to link a model with the link_model API. Replace the values enclosed in <> with your own:

import wandb

run = wandb.init(entity="<your-entity>", project="<your-project>")
run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")
run.finish()

See link_model in the API Reference guide for more information on optional parameters.

If the registered-model-name matches the name of a registered model that already exists within the Model Registry, the model will be linked to that registered model. If no such registered model exists, a new one will be created and the model will be the first one linked.

For example, suppose you have an existing registered model named “Fine-Tuned-Review-Autocompletion” in your Model Registry (see example here). And suppose that a few model versions are already linked to it: v0, v1, v2. If you call link_model with registered-model-name="Fine-Tuned-Review-Autocompletion", the new model will be linked to this existing registered model as v3. If no registered model with this name exists, a new one will be created and the new model will be linked as v0.

Example: Log and link a model to the W&B Model Registry

For example, the following code snippet logs model files and links the model to a registered model named "Fine-Tuned-Review-Autocompletion".

To do this, a user calls the link_model API. When they call the API, they provide a local filepath that points to the model content (path) and a name for the registered model to link it to (registered_model_name).

import wandb

path = "/local/dir/model.pt"
registered_model_name = "Fine-Tuned-Review-Autocompletion"

run = wandb.init(project="llm-evaluation", entity="noa")
run.link_model(path=path, registered_model_name=registered_model_name)
run.finish()

1.6.6 - Log summary metrics

In addition to values that change over time during training, it is often important to track a single value that summarizes a model or a preprocessing step. Log this information in a W&B Run’s summary dictionary. A Run’s summary dictionary can handle numpy arrays, PyTorch tensors or TensorFlow tensors. When a value is one of these types we persist the entire tensor in a binary file and store high level metrics in the summary object, such as min, mean, variance, percentiles, and more.

By default, the last value logged with wandb.log for each key is set as that key's value in a W&B Run's summary dictionary. If a summary metric is modified, the previous value is lost.

The following code snippet demonstrates how to provide a custom summary metric to W&B:

wandb.init(config=args)

best_accuracy = 0
for epoch in range(1, args.epochs + 1):
    test_loss, test_accuracy = test()
    if test_accuracy > best_accuracy:
        wandb.run.summary["best_accuracy"] = test_accuracy
        best_accuracy = test_accuracy

You can update the summary attribute of an existing W&B Run after training has completed. Use the W&B Public API to update the summary attribute:

import numpy as np
import wandb

api = wandb.Api()
run = api.run("username/project/run_id")
run.summary["tensor"] = np.random.random(1000)
run.summary.update()

Customize summary metrics

Custom metric summaries are useful to capture model performance at the best step, instead of the last step, of training in your wandb.summary. For example, you might want to capture the maximum accuracy or the minimum loss value, instead of the final value.

Summary metrics can be controlled using the summary argument in define_metric, which accepts the following values: "min", "max", "mean", "best", "last", and "none". The "best" parameter can only be used in conjunction with the optional objective argument, which accepts the values "minimize" and "maximize". Here’s an example of capturing the lowest value of loss and the maximum value of accuracy in the summary, instead of the default summary behavior, which uses the final value from history.

import wandb
import random

random.seed(1)
wandb.init()
# define a metric we are interested in the minimum of
wandb.define_metric("loss", summary="min")
# define a metric we are interested in the maximum of
wandb.define_metric("acc", summary="max")
for i in range(10):
    log_dict = {
        "loss": random.uniform(0, 1 / (i + 1)),
        "acc": random.uniform(1 / (i + 1), 1),
    }
    wandb.log(log_dict)

Here’s what the resulting min and max summary values look like, in pinned columns in the sidebar on the Project Page workspace:

Project Page Sidebar

1.6.7 - Log tables

Log tables with W&B.

Use wandb.Table to log data to visualize and query with W&B. In this guide, learn how to:

  1. Create Tables
  2. Add Data
  3. Retrieve Data
  4. Save Tables

Create tables

To define a Table, specify the columns you want to see for each row of data. Each row might be a single item in your training dataset, a particular step or epoch during training, a prediction made by your model on a test item, an object generated by your model, etc. Each column has a fixed type: numeric, text, boolean, image, video, audio, etc. You do not need to specify the type in advance. Give each column a name, and make sure to only pass data of that type into that column index. For a more detailed example, see this report.

Use the wandb.Table constructor in one of two ways:

  1. List of Rows: Log named columns and rows of data. For example, the following code snippet generates a table with two rows and three columns:
wandb.Table(columns=["a", "b", "c"], data=[["1a", "1b", "1c"], ["2a", "2b", "2c"]])
  2. Pandas DataFrame: Log a DataFrame using wandb.Table(dataframe=my_df). Column names will be extracted from the DataFrame.

From an existing array or dataframe

# assume a model has returned predictions on four images
# with the following fields available:
# - the image id
# - the image pixels, wrapped in a wandb.Image()
# - the model's predicted label
# - the ground truth label
my_data = [
    [0, wandb.Image("img_0.jpg"), 0, 0],
    [1, wandb.Image("img_1.jpg"), 8, 0],
    [2, wandb.Image("img_2.jpg"), 7, 1],
    [3, wandb.Image("img_3.jpg"), 1, 1],
]

# create a wandb.Table() with corresponding columns
columns = ["id", "image", "prediction", "truth"]
test_table = wandb.Table(data=my_data, columns=columns)

Add data

Tables are mutable. As your script executes you can add more data to your table, up to 200,000 rows. There are two ways to add data to a table:

  1. Add a Row: table.add_data("3a", "3b", "3c"). Note that the new row is not represented as a list. If your row is in list format, use the star notation, *, to expand the list to positional arguments: table.add_data(*my_row_list). The row must contain the same number of entries as there are columns in the table.
  2. Add a Column: table.add_column(name="col_name", data=col_data). Note that the length of col_data must be equal to the table’s current number of rows. Here, col_data can be a list or a NumPy NDArray. A short sketch of both approaches follows.
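
The following minimal sketch shows both ways of adding data to a small table; the column names and values are illustrative:

import numpy as np
import wandb

table = wandb.Table(columns=["a", "b", "c"], data=[["1a", "1b", "1c"]])

# Add rows positionally, or expand a list with *
table.add_data("2a", "2b", "2c")
my_row_list = ["3a", "3b", "3c"]
table.add_data(*my_row_list)

# Add a column; its length must match the current number of rows (3 here)
table.add_column(name="col_d", data=np.array([1, 2, 3]))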

Adding data incrementally

This code sample shows how to create and populate a W&B table incrementally. You define the table with predefined columns, including confidence scores for all possible labels, and add data row by row during inference. You can also add data to tables incrementally when resuming runs.

# Define the columns for the table, including confidence scores for each label
columns = ["id", "image", "guess", "truth"]
for digit in range(10):  # Add confidence score columns for each digit (0-9)
    columns.append(f"score_{digit}")

# Initialize the table with the defined columns
test_table = wandb.Table(columns=columns)

# Iterate through the test dataset and add data to the table row by row
# Each row includes the image ID, image, predicted label, true label, and confidence scores
for img_id, img in enumerate(mnist_test_data):
    true_label = mnist_test_data_labels[img_id]  # Ground truth label
    guess_label = my_model.predict(img)  # Predicted label
    # Placeholder: per-digit confidence scores, one value for each score_* column
    confidence_scores = my_model.predict_proba(img)
    test_table.add_data(
        img_id, wandb.Image(img), guess_label, true_label, *confidence_scores
    )  # Each row must supply one value per column

Adding data to resumed runs

You can incrementally update a W&B table in resumed runs by loading an existing table from an artifact, retrieving the last row of data, and adding the updated metrics. Then, reinitialize the table for compatibility and log the updated version back to W&B.

# Load the existing table from the artifact
best_checkpt_table = wandb.use_artifact(table_tag).get(table_name)

# Get the last row of data from the table for resuming
best_iter, best_metric_max, best_metric_min = best_checkpt_table.data[-1]

# Update the best metrics as needed

# Add the updated data to the table
best_checkpt_table.add_data(best_iter, best_metric_max, best_metric_min)

# Reinitialize the table with its updated data to ensure compatibility
best_checkpt_table = wandb.Table(
    columns=["col1", "col2", "col3"], data=best_checkpt_table.data
)

# Log the updated table to Weights & Biases
wandb.log({table_name: best_checkpt_table})

Retrieve data

Once data is in a Table, access it by column or by row:

  1. Row Iterator: Users can use the row iterator of Table such as for ndx, row in table.iterrows(): ... to efficiently iterate over the data’s rows.
  2. Get a Column: Users can retrieve a column of data using table.get_column("col_name"). As a convenience, users can pass convert_to="numpy" to convert the column to a NumPy NDArray of primitives. This is useful if your column contains media types such as wandb.Image so that you can access the underlying data directly. A short sketch of both access patterns follows.
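
A minimal sketch of both access patterns; the table contents are illustrative:

import wandb

table = wandb.Table(columns=["id", "score"], data=[[0, 0.9], [1, 0.7]])

# Iterate over the rows
for ndx, row in table.iterrows():
    print(ndx, row)

# Retrieve a single column, optionally converted to a NumPy array
scores = table.get_column("score", convert_to="numpy")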

Save tables

After you generate a table of data in your script, for example a table of model predictions, save it to W&B to visualize the results live.

Log a table to a run

Use wandb.log() to save your table to the run, like so:

run = wandb.init()
my_table = wandb.Table(columns=["a", "b"], data=[["1a", "1b"], ["2a", "2b"]])
run.log({"table_key": my_table})

Each time a table is logged to the same key, a new version of the table is created and stored in the backend. This means you can log the same table across multiple training steps to see how model predictions improve over time, or compare tables across different runs, as long as they’re logged to the same key. You can log up to 200,000 rows.
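
For example, the following minimal sketch logs a table to the same key at several training steps, creating a new version of the table each time; the project name and contents are illustrative:

import wandb

run = wandb.init(project="<your-project>")

for step in range(3):
    preds_table = wandb.Table(columns=["step", "prediction"], data=[[step, step * 0.1]])
    run.log({"predictions": preds_table})  # each log call stores a new version under this key

run.finish()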

Access tables programmatically

In the backend, Tables are persisted as Artifacts. If you are interested in accessing a specific version, you can do so with the artifact API:

with wandb.init() as run:
    my_table = run.use_artifact("run-<run-id>-<table-name>:<tag>").get("<table-name>")

For more information on Artifacts, see the Artifacts Chapter in the Developer Guide.

Visualize tables

Any table logged this way will show up in your Workspace on both the Run Page and the Project Page. For more information, see Visualize and Analyze Tables.

Artifact tables

Use artifact.add() to log tables to the Artifacts section of your run instead of the workspace. This could be useful if you have a dataset that you want to log once and then reference for future runs.

run = wandb.init(project="my_project")
# create a wandb Artifact for each meaningful step
test_predictions = wandb.Artifact("mnist_test_preds", type="predictions")

# [build up your predictions data as above]
test_table = wandb.Table(data=data, columns=columns)
test_predictions.add(test_table, "my_test_key")
run.log_artifact(test_predictions)

Refer to this Colab for a detailed example of artifact.add() with image data and this Report for an example of how to use Artifacts and Tables to version control and deduplicate tabular data.

Join Artifact tables

You can join tables you have locally constructed or tables you have retrieved from other artifacts using wandb.JoinedTable(table_1, table_2, join_key).

Args Description
table_1 (str, wandb.Table, ArtifactEntry) the path to a wandb.Table in an artifact, the table object, or ArtifactEntry
table_2 (str, wandb.Table, ArtifactEntry) the path to a wandb.Table in an artifact, the table object, or ArtifactEntry
join_key (str, [str, str]) key or keys on which to perform the join

To join two Tables you have logged previously in an artifact context, fetch them from the artifact and join the result into a new Table.

The following example demonstrates how to read one Table of original songs called 'original_songs' and another Table of synthesized versions of the same songs called 'synth_songs'. The code joins the two tables on "song_id" and uploads the resulting table as a new W&B Table:

import wandb

run = wandb.init(project="my_project")

# fetch original songs table
orig_songs = run.use_artifact("original_songs:latest")
orig_table = orig_songs.get("original_samples")

# fetch synthesized songs table
synth_songs = run.use_artifact("synth_songs:latest")
synth_table = synth_songs.get("synth_samples")

# join tables on "song_id"
join_table = wandb.JoinedTable(orig_table, synth_table, "song_id")
join_at = wandb.Artifact("synth_summary", "analysis")

# add table to artifact and log to W&B
join_at.add(join_table, "synth_explore")
run.log_artifact(join_at)

Read this tutorial for an example of how to combine two tables stored in different Artifact objects.

1.6.8 - Track CSV files with experiments

Importing and logging data into W&B

Use the W&B Python Library to log a CSV file and visualize it in a W&B Dashboard. W&B Dashboards are the central place to organize and visualize results from your machine learning models. This is particularly useful if you have a CSV file that contains information about previous machine learning experiments that are not logged in W&B, or if you have a CSV file that contains a dataset.

Import and log your dataset CSV file

We suggest you use W&B Artifacts to make the contents of the CSV file easier to reuse.

  1. To get started, first import your CSV file. In the following code snippet, replace the iris.csv filename with the name of your CSV file:
import wandb
import pandas as pd

# Read our CSV into a new DataFrame
new_iris_dataframe = pd.read_csv("iris.csv")
  2. Convert the CSV file to a W&B Table to utilize W&B Dashboards.
# Convert the DataFrame into a W&B Table
iris_table = wandb.Table(dataframe=new_iris_dataframe)
  3. Next, create a W&B Artifact and add the table to the Artifact:
# Add the table to an Artifact to increase the row
# limit to 200000 and make it easier to reuse
iris_table_artifact = wandb.Artifact("iris_artifact", type="dataset")
iris_table_artifact.add(iris_table, "iris_table")

# Log the raw csv file within an artifact to preserve our data
iris_table_artifact.add_file("iris.csv")

For more information about W&B Artifacts, see the Artifacts chapter.

  4. Lastly, start a new W&B Run to track and log to W&B with wandb.init:
# Start a W&B run to log data
run = wandb.init(project="tables-walkthrough")

# Log the table to visualize with a run...
run.log({"iris": iris_table})

# and Log as an Artifact to increase the available row limit!
run.log_artifact(iris_table_artifact)

The wandb.init() API spawns a new background process to log data to a Run, and it synchronizes data to wandb.ai (by default). View live visualizations on your W&B Workspace Dashboard. The following image demonstrates the output of the code snippet.

CSV file imported into W&B Dashboard

The full script with the preceding code snippets is found below:

import wandb
import pandas as pd

# Read our CSV into a new DataFrame
new_iris_dataframe = pd.read_csv("iris.csv")

# Convert the DataFrame into a W&B Table
iris_table = wandb.Table(dataframe=new_iris_dataframe)

# Add the table to an Artifact to increase the row
# limit to 200000 and make it easier to reuse
iris_table_artifact = wandb.Artifact("iris_artifact", type="dataset")
iris_table_artifact.add(iris_table, "iris_table")

# log the raw csv file within an artifact to preserve our data
iris_table_artifact.add_file("iris.csv")

# Start a W&B run to log data
run = wandb.init(project="tables-walkthrough")

# Log the table to visualize with a run...
run.log({"iris": iris_table})

# and Log as an Artifact to increase the available row limit!
run.log_artifact(iris_table_artifact)

# Finish the run (useful in notebooks)
run.finish()

Import and log your CSV of Experiments

In some cases, you might have your experiment details in a CSV file. Common details found in such CSV files include:

  • A name for the experiment run
  • Initial notes
  • Tags to differentiate the experiments
  • Configurations needed for your experiment (with the added benefit of being able to utilize our Sweeps Hyperparameter Tuning).
Experiment Model Name Notes Tags Num Layers Final Train Acc Final Val Acc Training Losses
Experiment 1 mnist-300-layers Overfit way too much on training data [latest] 300 0.99 0.90 [0.55, 0.45, 0.44, 0.42, 0.40, 0.39]
Experiment 2 mnist-250-layers Current best model [prod, best] 250 0.95 0.96 [0.55, 0.45, 0.44, 0.42, 0.40, 0.39]
Experiment 3 mnist-200-layers Did worse than the baseline model. Need to debug [debug] 200 0.76 0.70 [0.55, 0.45, 0.44, 0.42, 0.40, 0.39]
Experiment N mnist-X-layers NOTES […, …]

W&B can take a CSV file of experiments and convert it into W&B Experiment Runs. The following code snippets and the full script demonstrate how to import and log your CSV file of experiments:

  1. To get started, first read in your CSV file and convert it into a Pandas DataFrame. Replace "experiments.csv" with the name of your CSV file:
import wandb
import pandas as pd

FILENAME = "experiments.csv"
loaded_experiment_df = pd.read_csv(FILENAME)

PROJECT_NAME = "Converted Experiments"

EXPERIMENT_NAME_COL = "Experiment"
NOTES_COL = "Notes"
TAGS_COL = "Tags"
CONFIG_COLS = ["Num Layers"]
SUMMARY_COLS = ["Final Train Acc", "Final Val Acc"]
METRIC_COLS = ["Training Losses"]

# Format Pandas DataFrame to make it easier to work with
for i, row in loaded_experiment_df.iterrows():
    run_name = row[EXPERIMENT_NAME_COL]
    notes = row[NOTES_COL]
    tags = row[TAGS_COL]

    config = {}
    for config_col in CONFIG_COLS:
        config[config_col] = row[config_col]

    metrics = {}
    for metric_col in METRIC_COLS:
        metrics[metric_col] = row[metric_col]

    summaries = {}
    for summary_col in SUMMARY_COLS:
        summaries[summary_col] = row[summary_col]
  2. Next, start a new W&B Run to track and log to W&B with wandb.init():
run = wandb.init(
    project=PROJECT_NAME, name=run_name, tags=tags, notes=notes, config=config
)

As an experiment runs, you might want to log every instance of your metrics so they are available to view, query, and analyze with W&B. Use the run.log() command to accomplish this:

run.log({key: val})

You can optionally log a final summary metric to define the outcome of the run. Use the W&B define_metric API to accomplish this. In this example case, we will add the summary metrics to our run with run.summary.update():

run.summary.update(summaries)

For more information about summary metrics, see Log Summary Metrics.

Below is the full example script that converts the above sample table into a W&B Dashboard:

FILENAME = "experiments.csv"
loaded_experiment_df = pd.read_csv(FILENAME)

PROJECT_NAME = "Converted Experiments"

EXPERIMENT_NAME_COL = "Experiment"
NOTES_COL = "Notes"
TAGS_COL = "Tags"
CONFIG_COLS = ["Num Layers"]
SUMMARY_COLS = ["Final Train Acc", "Final Val Acc"]
METRIC_COLS = ["Training Losses"]

for i, row in loaded_experiment_df.iterrows():
    run_name = row[EXPERIMENT_NAME_COL]
    notes = row[NOTES_COL]
    tags = row[TAGS_COL]

    config = {}
    for config_col in CONFIG_COLS:
        config[config_col] = row[config_col]

    metrics = {}
    for metric_col in METRIC_COLS:
        metrics[metric_col] = row[metric_col]

    summaries = {}
    for summary_col in SUMMARY_COLS:
        summaries[summary_col] = row[summary_col]

    run = wandb.init(
        project=PROJECT_NAME, name=run_name, tags=tags, notes=notes, config=config
    )

    for key, val in metrics.items():
        if isinstance(val, list):
            for _val in val:
                run.log({key: _val})
        else:
            run.log({key: val})

    run.summary.update(summaries)
    run.finish()

1.7 - Track Jupyter notebooks

Use W&B with Jupyter to get interactive visualizations without leaving your notebook.

Use W&B with Jupyter to get interactive visualizations without leaving your notebook. Combine custom analysis, experiments, and prototypes, all fully logged.

Use cases for W&B with Jupyter notebooks

  1. Iterative experimentation: Run and re-run experiments, tweaking parameters, and have all the runs you do saved automatically to W&B without having to take manual notes along the way.
  2. Code saving: When reproducing a model, it’s hard to know which cells in a notebook ran, and in which order. Turn on code saving on your settings page to save a record of cell execution for each experiment.
  3. Custom analysis: Once runs are logged to W&B, it’s easy to get a dataframe from the API and do custom analysis, then log those results to W&B to save and share in reports.

Getting started in a notebook

Start your notebook with the following code to install W&B and link your account:

!pip install wandb -qqq
import wandb
wandb.login()

Next, set up your experiment and save hyperparameters:

wandb.init(
    project="jupyter-projo",
    config={
        "batch_size": 128,
        "learning_rate": 0.01,
        "dataset": "CIFAR-100",
    },
)

After running wandb.init(), start a new cell with %%wandb to see live graphs in the notebook. If you run this cell multiple times, data will be appended to the run.

%%wandb

# Your training loop here

Try it for yourself in this example notebook.

Rendering live W&B interfaces directly in your notebooks

You can also display any existing dashboards, sweeps, or reports directly in your notebook using the %wandb magic:

# Display a project workspace
%wandb USERNAME/PROJECT
# Display a single run
%wandb USERNAME/PROJECT/runs/RUN_ID
# Display a sweep
%wandb USERNAME/PROJECT/sweeps/SWEEP_ID
# Display a report
%wandb USERNAME/PROJECT/reports/REPORT_ID
# Specify the height of embedded iframe
%wandb USERNAME/PROJECT -h 2048

As an alternative to the %%wandb or %wandb magics, after running wandb.init() you can end any cell with wandb.run to show in-line graphs, or call ipython.display(...) on any report, sweep, or run object returned from our APIs.

# Initialize wandb.run first
wandb.init()

# If cell outputs wandb.run, you'll see live graphs
wandb.run

Additional Jupyter features in W&B

  1. Easy authentication in Colab: When you call wandb.init for the first time in a Colab, we automatically authenticate your runtime if you’re currently logged in to W&B in your browser. On the overview tab of your run page, you’ll see a link to the Colab.
  2. Jupyter Magic: Display dashboards, sweeps and reports directly in your notebooks. The %wandb magic accepts a path to your project, sweeps or reports and will render the W&B interface directly in the notebook.
  3. Launch dockerized Jupyter: Call wandb docker --jupyter to launch a docker container, mount your code in it, ensure Jupyter is installed, and launch on port 8888.
  4. Run cells in arbitrary order without fear: By default, we wait until the next time wandb.init is called to mark a run as finished. That allows you to run multiple cells (say, one to set up data, one to train, one to test) in whatever order you like and have them all log to the same run. If you turn on code saving in settings, you’ll also log the cells that were executed, in order and in the state in which they were run, enabling you to reproduce even the most non-linear of pipelines. To mark a run as complete manually in a Jupyter notebook, call run.finish().
import wandb

run = wandb.init()

# training script and logging goes here

run.finish()

1.8 - Experiments limits and performance

Keep your pages in W&B faster and more responsive by logging within these suggested bounds.

Keep your pages in W&B faster and more responsive by logging within the following suggested bounds.

Logged metrics

Use wandb.log to track experiment metrics. Once logged, these metrics generate charts and show up in tables. Too much logged data can make the app slow.

Distinct metric count

For faster performance, keep the total number of distinct metrics in a project under 10,000.

import wandb

wandb.log(
    {
        "a": 1,  # "a" is a distinct metric
        "b": {
            "c": "hello",  # "b.c" is a distinct metric
            "d": [1, 2, 3],  # "b.d" is a distinct metric
        },
    }
)

If your workspace suddenly slows down, check whether recent runs have unintentionally logged thousands of new metrics. (This is easiest to spot by seeing sections with thousands of plots that have only one or two runs visible on them.) If they have, consider deleting those runs and recreating them with the desired metrics.

Value width

Limit the size of a single logged value to under 1 MB and the total size of a single wandb.log call to under 25 MB. This limit does not apply to wandb.Media types like wandb.Image, wandb.Audio, etc.

# ❌ not recommended
wandb.log({"wide_key": range(10000000)})

# ❌ not recommended
with f as open("large_file.json", "r"):
    large_data = json.load(f)
    wandb.log(large_data)
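
A minimal sketch of one alternative, assuming the large payload lives in a local file: store it as an artifact instead of passing the full contents to wandb.log.

# ✅ recommended: store large payloads as files or artifacts instead of wide metric values
artifact = wandb.Artifact("large-file", type="dataset")  # "large-file" is an illustrative name
artifact.add_file("large_file.json")
wandb.log_artifact(artifact)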

Wide values can affect the plot load times for all metrics in the run, not just the metric with the wide values.

Metric frequency

Pick a logging frequency that is appropriate to the metric you are logging. As a general rule of thumb, the wider the metric the less frequently you should log it. W&B recommends:

  • Scalars: <100,000 logged points per metric
  • Media: <50,000 logged points per metric
  • Histograms: <10,000 logged points per metric
# Training loop with 1m total steps
for step in range(1000000):
    # ❌ not recommended
    wandb.log(
        {
            "scalar": step,  # 100,000 scalars
            "media": wandb.Image(...),  # 100,000 images
            "histogram": wandb.Histogram(...),  # 100,000 histograms
        }
    )

    # ✅ recommended
    if step % 1000 == 0:
        wandb.log(
            {
                "histogram": wandb.Histogram(...),  # 10,000 histograms
            },
            commit=False,
        )
    if step % 200 == 0:
        wandb.log(
            {
                "media": wandb.Image(...),  # 50,000 images
            },
            commit=False,
        )
    if step % 100 == 0:
        wandb.log(
            {
                "scalar": step,  # 100,000 scalars
            },
            commit=True,
        )  # Commit batched, per-step metrics together

Config size

Limit the total size of your run config to less than 10 MB. Logging large values could slow down your project workspaces and runs table operations.

# ✅ recommended
wandb.init(
    config={
        "lr": 0.1,
        "batch_size": 32,
        "epochs": 4,
    }
)

# ❌ not recommended
wandb.init(
    config={
        "steps": range(10000000),
    }
)

# ❌ not recommended
with f as open("large_config.json", "r"):
    large_config = json.load(f)
    wandb.init(config=large_config)

Run count

For faster loading times, keep the total number of runs in a single project under 10,000. Large run counts can slow down project workspaces and runs table operations, especially when grouping is enabled or runs have a large count of distinct metrics.

If you find that you or your team are frequently accessing the same set of runs (for example, recent runs), consider bulk moving other runs to a new project used as an archive, leaving a smaller set of runs in your working project.

Section count

Having hundreds of sections in a workspace can hurt performance. Consider creating sections based on high-level groupings of metrics and avoiding an anti-pattern of one section for each metric.

If you find you have too many sections and performance is slow, consider the workspace setting to create sections by prefix rather than suffix, which can result in fewer sections and better performance.

Toggling section creation

File count

Keep the total number of files uploaded for a single run under 1,000. You can use W&B Artifacts when you need to log a large number of files. Exceeding 1,000 files in a single run can slow down your run pages.

Python script performance

There are a few ways that the performance of your Python script can be reduced:

  1. The size of your data is too large. Large data sizes could introduce a >1 ms overhead to the training loop.
  2. The speed of your network and how the W&B backend is configured
  3. Calling wandb.log more than a few times per second. This is due to a small latency added to the training loop every time wandb.log is called.

W&B does not assert any limits beyond rate limiting. The W&B Python SDK automatically retries requests that exceed limits with exponential backoff, and it reports a “Network failure” on the command line. For unpaid accounts, W&B may reach out in extreme cases where usage exceeds reasonable thresholds.

Rate limits

W&B SaaS Cloud API implements a rate limit to maintain system integrity and ensure availability. This measure prevents any single user from monopolizing available resources in the shared infrastructure, ensuring that the service remains accessible to all users. You may encounter a lower rate limit for a variety of reasons.

Rate limit HTTP headers

The following table describes rate limit HTTP headers:

Header name Description
RateLimit-Limit The amount of quota available per time window, scaled in the range of 0 to 1000
RateLimit-Remaining The amount of quota in the current rate limit window, scaled in the range of 0 and 1000
RateLimit-Reset The number of seconds until the current quota resets

Rate limits on metric logging API

The wandb.log calls in your script use a metrics logging API to log your training data to W&B. This API is engaged through either online or offline syncing. In either case, it imposes a rate limit quota in a rolling time window. This includes limits on total request size and request rate, where the latter refers to the number of requests in a time duration.

W&B applies rate limits per W&B project. So if you have 3 projects in a team, each project has its own rate limit quota. Users on Teams and Enterprise plans have higher rate limits than those on the Free plan.

When you hit the rate limit while using the metrics logging API, you see a relevant message indicating the error in the standard output.

Suggestions for staying under the metrics logging API rate limit

Exceeding the rate limit may delay run.finish() until the rate limit resets. To avoid this, consider the following strategies:

  • Update your W&B Python SDK version: Ensure you are using the latest version of the W&B Python SDK. The W&B Python SDK is regularly updated and includes enhanced mechanisms for gracefully retrying requests and optimizing quota usage.
  • Reduce metric logging frequency: Minimize the frequency of logging metrics to conserve your quota. For example, you can modify your code to log metrics every five epochs instead of every epoch:
if epoch % 5 == 0:  # Log metrics every 5 epochs
    wandb.log({"acc": accuracy, "loss": loss})
  • Manual data syncing: W&B stores your run data locally if you are rate limited. You can manually sync your data with the command wandb sync <run-file-path>. For more details, see the wandb sync reference.

Rate limits on GraphQL API

The W&B Models UI and SDK’s public API make GraphQL requests to the server for querying and modifying data. For all GraphQL requests in SaaS Cloud, W&B applies rate limits per IP address for unauthorized requests and per user for authorized requests. The limit is based on request rate (request per second) within a fixed time window, where your pricing plan determines the default limits. For relevant SDK requests that specify a project path (for example, reports, runs, artifacts), W&B applies rate limits per project, measured by database query time.

Users on Teams and Enterprise plans receive higher rate limits than those on the Free plan. When you hit the rate limit while using the W&B Models SDK’s public API, you see a relevant message indicating the error in the standard output.

Suggestions for staying under the GraphQL API rate limit

If you are fetching a large volume of data using the W&B Models SDK’s public API, consider waiting at least one second between requests. If you receive a 429 status code or see RateLimit-Remaining=0 in the response headers, wait for the number of seconds specified in RateLimit-Reset before retrying.
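
For illustration, here is a minimal sketch of that retry pattern, assuming you are issuing raw HTTP requests yourself; the helper name and URL are placeholders, and the W&B SDK already performs similar retries for you.

import time

import requests


def get_with_rate_limit_retry(url, headers=None, max_retries=5):
    """Hypothetical helper: retry a GET request when rate limit headers say to wait."""
    response = requests.get(url, headers=headers)
    for _ in range(max_retries):
        remaining = response.headers.get("RateLimit-Remaining")
        if response.status_code != 429 and remaining != "0":
            return response
        # Wait the number of seconds until the current quota resets, then retry
        time.sleep(int(response.headers.get("RateLimit-Reset", "1")))
        response = requests.get(url, headers=headers)
    return response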

Browser considerations

The W&B app can be memory-intensive and performs best in Chrome. Depending on your computer’s memory, having W&B active in 3+ tabs at once can cause performance to degrade. If you encounter unexpectedly slow performance, consider closing other tabs or applications.

Reporting performance issues to W&B

W&B takes performance seriously and investigates every report of lag. To expedite investigation, when reporting slow loading times consider invoking W&B’s built-in performance logger that captures key metrics and performance events. Append &PERF_LOGGING to your URL, and share the output of your console.

Adding PERF_LOGGING

1.9 - Import and export data

Import data from MLFlow, export or update data that you have saved to W&B

Export data or import data with W&B Public APIs.

Import data from MLFlow

W&B supports importing data from MLFlow, including experiments, runs, artifacts, metrics, and other metadata.

Install dependencies:

# note: this requires py38+
pip install wandb[importers]

Log in to W&B. Follow the prompts if you have not logged in before.

wandb login

Import all runs from an existing MLFlow server:

from wandb.apis.importers.mlflow import MlflowImporter

importer = MlflowImporter(mlflow_tracking_uri="...")

runs = importer.collect_runs()
importer.import_runs(runs)

By default, importer.collect_runs() collects all runs from the MLFlow server. If you prefer to import only a subset of runs, you can construct your own runs iterable and pass it to the importer.

from typing import Iterable

import mlflow
from wandb.apis.importers.mlflow import MlflowRun

client = mlflow.tracking.MlflowClient(mlflow_tracking_uri)

runs: Iterable[MlflowRun] = []
for run in client.search_runs(...):
    runs.append(MlflowRun(run, client))

importer.import_runs(runs)

To skip importing artifacts, you can pass artifacts=False:

importer.import_runs(runs, artifacts=False)

To import to a specific W&B entity and project, you can pass a Namespace:

from wandb.apis.importers import Namespace

importer.import_runs(runs, namespace=Namespace(entity, project))

Export Data

Use the Public API to export or update data that you have saved to W&B. Before using this API, log data from your script. Check the Quickstart for more details.

Use Cases for the Public API

  • Export Data: Pull down a dataframe for custom analysis in a Jupyter Notebook. Once you have explored the data, you can sync your findings by creating a new analysis run and logging results, for example: wandb.init(job_type="analysis")
  • Update Existing Runs: You can update the data logged in association with a W&B run. For example, you might want to update the config of a set of runs to include additional information, like the architecture or a hyperparameter that wasn’t originally logged.

See the Generated Reference Docs for details on available functions.

Authentication

Authenticate your machine with your API key in one of two ways:

  1. Run wandb login on the command line and paste in your API key.
  2. Set the WANDB_API_KEY environment variable to your API key.
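
A minimal sketch of the second approach from Python; the key value is a placeholder:

import os

# Set the environment variable before wandb needs it (placeholder value)
os.environ["WANDB_API_KEY"] = "<your-api-key>"

import wandb

# wandb.login() picks up WANDB_API_KEY, or prompts you if it is not set
wandb.login()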

Find the run path

To use the Public API, you’ll often need the run path which is <entity>/<project>/<run_id>. In the app UI, open a run page and click the Overview tab to get the run path.

Export Run Data

Download data from a finished or active run. Common usage includes downloading a dataframe for custom analysis in a Jupyter notebook, or using custom logic in an automated environment.

import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")

The most commonly used attributes of a run object are:

Attribute Meaning
run.config A dictionary of the run’s configuration information, such as the hyperparameters for a training run or the preprocessing methods for a run that creates a dataset Artifact. Think of these as the run’s inputs.
run.history() A list of dictionaries meant to store values that change while the model is training such as loss. The command wandb.log() appends to this object.
run.summary A dictionary of information that summarizes the run’s results. This can be scalars like accuracy and loss, or large files. By default, wandb.log() sets the summary to the final value of a logged time series. The contents of the summary can also be set directly. Think of the summary as the run’s outputs.

You can also modify or update the data of past runs. By default a single instance of an api object will cache all network requests. If your use case requires real time information in a running script, call api.flush() to get updated values.
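
For example, a minimal sketch of refreshing cached values with api.flush(); the run path is a placeholder:

import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")

print(run.summary)  # values may come from the client-side cache

api.flush()  # clear the cache so subsequent reads reflect the latest data
print(run.summary)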

Understanding the Different Attributes

For the run below,

n_epochs = 5
config = {"n_epochs": n_epochs}
run = wandb.init(project=project, config=config)
for n in range(run.config.get("n_epochs")):
    run.log(
        {"val": random.randint(0, 1000), "loss": (random.randint(0, 1000) / 1000.00)}
    )
run.finish()

these are the outputs for the different run object attributes:

run.config

{"n_epochs": 5}

run.history()

   _step  val   loss  _runtime  _timestamp
0      0  500  0.244         4  1644345412
1      1   45  0.521         4  1644345412
2      2  240  0.785         4  1644345412
3      3   31  0.305         4  1644345412
4      4  525  0.041         4  1644345412

run.summary

{
    "_runtime": 4,
    "_step": 4,
    "_timestamp": 1644345412,
    "_wandb": {"runtime": 3},
    "loss": 0.041,
    "val": 525,
}

Sampling

The default history method samples the metrics to a fixed number of points (the default is 500; you can change this with the samples argument). If you want to export all of the data on a large run, use the run.scan_history() method. For more details, see the API Reference.
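
For example, a minimal sketch of both options; the run path is a placeholder:

import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")

sampled_df = run.history(samples=100)  # at most 100 sampled points per metric
all_rows = list(run.scan_history())    # every logged row, unsampled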

Querying Multiple Runs

This example script finds a project and outputs a CSV of runs with name, configs and summary stats. Replace <entity> and <project> with your W&B entity and the name of your project, respectively.

import pandas as pd
import wandb

api = wandb.Api()
entity, project = "<entity>", "<project>"
runs = api.runs(entity + "/" + project)

summary_list, config_list, name_list = [], [], []
for run in runs:
    # .summary contains output keys/values for
    # metrics such as accuracy.
    #  We call ._json_dict to omit large files
    summary_list.append(run.summary._json_dict)

    # .config contains the hyperparameters.
    #  We remove special values that start with _.
    config_list.append({k: v for k, v in run.config.items() if not k.startswith("_")})

    # .name is the human-readable name of the run.
    name_list.append(run.name)

runs_df = pd.DataFrame(
    {"summary": summary_list, "config": config_list, "name": name_list}
)

runs_df.to_csv("project.csv")

The W&B API also provides a way for you to query across runs in a project with api.runs(). The most common use case is exporting runs data for custom analysis. The query interface is the same as the one MongoDB uses.

runs = api.runs(
    "username/project",
    {"$or": [{"config.experiment_name": "foo"}, {"config.experiment_name": "bar"}]},
)
print(f"Found {len(runs)} runs")

Calling api.runs returns a Runs object that is iterable and acts like a list. By default the object loads 50 runs at a time in sequence as required, but you can change the number loaded per page with the per_page keyword argument.

api.runs also accepts an order keyword argument. The default order is -created_at. To order results ascending, specify +created_at. You can also sort by config or summary values. For example, summary.val_acc or config.experiment_name.
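
For example, a minimal sketch that pages through runs 100 at a time, ordered by ascending creation time; the entity and project are placeholders:

import wandb

api = wandb.Api()

runs = api.runs("<entity>/<project>", order="+created_at", per_page=100)

for run in runs:
    print(run.name, run.created_at)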

Error Handling

If errors occur while talking to W&B servers, a wandb.CommError is raised. The original exception can be introspected via the exc attribute.
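
For example, a minimal sketch that catches the error; the run path is a placeholder:

import wandb

api = wandb.Api()

try:
    run = api.run("<entity>/<project>/<run_id>")
    print(run.name)
except wandb.CommError as e:
    # The original exception is available on the exc attribute
    print("Could not reach the W&B servers:", e.exc)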

Get the latest git commit through the API

In the UI, click on a run and then click the Overview tab on the run page to see the latest git commit. It’s also in the file wandb-metadata.json. Using the public API, you can get the git hash with run.commit.
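
For example, a minimal sketch; the run path is a placeholder:

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
print(run.commit)  # latest git commit hash recorded for the run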

Get a run’s name and ID during a run

After calling wandb.init() you can access the random run ID or the human readable run name from your script like this:

  • Unique run ID (8 character hash): wandb.run.id
  • Random run name (human readable): wandb.run.name

If you’re thinking about ways to set useful identifiers for your runs, here’s what we recommend:

  • Run ID: leave it as the generated hash. This needs to be unique across runs in your project.
  • Run name: This should be something short, readable, and preferably unique so that you can tell the difference between different lines on your charts.
  • Run notes: This is a great place to put a quick description of what you’re doing in your run. You can set this with wandb.init(notes="your notes here")
  • Run tags: Track things dynamically in run tags, and use filters in the UI to filter your table down to just the runs you care about. You can set tags from your script and then edit them in the UI, both in the runs table and the overview tab of the run page. See the detailed instructions here.
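
For example, a minimal sketch that sets a readable name, notes, and tags when initializing a run; the values shown are illustrative:

import wandb

run = wandb.init(
    project="<your-project>",
    name="resnet50-lr-0.01",  # short, readable run name
    notes="Trying a lower learning rate with more dropout.",
    tags=["experiment", "debug"],  # edit or filter by tags later in the UI
)
run.finish()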

Public API Examples

Export data to visualize in matplotlib or seaborn

Check out our API examples for some common export patterns. You can also click the download button on a custom plot or on the expanded runs table to download a CSV from your browser.

Read metrics from a run

This example outputs timestamp and accuracy saved with wandb.log({"accuracy": acc}) for a run saved to "<entity>/<project>/<run_id>".

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
if run.state == "finished":
    for i, row in run.history().iterrows():
        print(row["_timestamp"], row["accuracy"])

Filter runs

You can filter runs by using the MongoDB Query Language.

Date

runs = api.runs(
    "<entity>/<project>",
    {"$and": [{"created_at": {"$lt": "YYYY-MM-DDT##", "$gt": "YYYY-MM-DDT##"}}]},
)

Read specific metrics from a run

To pull specific metrics from a run, use the keys argument. The default number of samples when using run.history() is 500. Logged steps that do not include a specific metric will appear in the output dataframe as NaN. The keys argument will cause the API to sample steps that include the listed metric keys more frequently.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
if run.state == "finished":
    for i, row in run.history(keys=["accuracy"]).iterrows():
        print(row["_timestamp"], row["accuracy"])

Compare two runs

This will output the config parameters that are different between run1 and run2.

import pandas as pd
import wandb

api = wandb.Api()

# replace with your <entity>, <project>, and <run_id>
run1 = api.run("<entity>/<project>/<run_id>")
run2 = api.run("<entity>/<project>/<run_id>")


df = pd.DataFrame([run1.config, run2.config]).transpose()

df.columns = [run1.name, run2.name]
print(df[df[run1.name] != df[run2.name]])

Outputs:

              c_10_sgd_0.025_0.01_long_switch base_adam_4_conv_2fc
batch_size                                 32                   16
n_conv_layers                               5                    4
optimizer                             rmsprop                 adam

Update metrics for a run, after the run has finished

This example sets the accuracy of a previous run to 0.9. It also modifies the accuracy histogram of a previous run to be the histogram of numpy_array.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
run.summary["accuracy"] = 0.9
run.summary["accuracy_histogram"] = wandb.Histogram(numpy_array)
run.summary.update()

Rename a metric in a run, after the run has finished

This example renames a summary column in your tables.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
run.summary["new_name"] = run.summary["old_name"]
del run.summary["old_name"]
run.summary.update()

Update config for an existing run

This example updates one of your configuration settings.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
run.config["key"] = updated_value
run.update()

Export system resource consumptions to a CSV file

The snippet below retrieves the system resource consumption metrics for a run and then saves them to a CSV file.

import wandb

run = wandb.Api().run("<entity>/<project>/<run_id>")

system_metrics = run.history(stream="events")
system_metrics.to_csv("sys_metrics.csv")

Get unsampled metric data

When you pull data from history, by default it’s sampled to 500 points. Get all the logged data points using run.scan_history(). Here’s an example downloading all the loss data points logged in history.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
history = run.scan_history()
losses = [row["loss"] for row in history]

Get paginated data from history

If metrics are being fetched slowly on our backend or API requests are timing out, you can try lowering the page size in scan_history so that individual requests don’t time out. The default page size is 500, so you can experiment with different sizes to see what works best:

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
run.scan_history(keys=sorted(cols), page_size=100)

Export metrics from all runs in a project to a CSV file

This script pulls down the runs in a project and produces a dataframe and a CSV of runs including their names, configs, and summary stats. Replace <entity> and <project> with your W&B entity and the name of your project, respectively.

import pandas as pd
import wandb

api = wandb.Api()
entity, project = "<entity>", "<project>"
runs = api.runs(entity + "/" + project)

summary_list, config_list, name_list = [], [], []
for run in runs:
    # .summary contains the output keys/values
    #  for metrics such as accuracy.
    #  We call ._json_dict to omit large files
    summary_list.append(run.summary._json_dict)

    # .config contains the hyperparameters.
    #  We remove special values that start with _.
    config_list.append({k: v for k, v in run.config.items() if not k.startswith("_")})

    # .name is the human-readable name of the run.
    name_list.append(run.name)

runs_df = pd.DataFrame(
    {"summary": summary_list, "config": config_list, "name": name_list}
)

runs_df.to_csv("project.csv")

Get the starting time for a run

This code snippet retrieves the time at which the run was created.

import wandb

api = wandb.Api()

run = api.run("entity/project/run_id")
start_time = run.created_at

Upload files to a finished run

The code snippet below uploads a selected file to a finished run.

import wandb

api = wandb.Api()

run = api.run("entity/project/run_id")
run.upload_file("file_name.extension")

Download a file from a run

This finds the file “model-best.h5” associated with run ID uxte44z7 in the cifar project and saves it locally.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
run.file("model-best.h5").download()

Download all files from a run

This finds all files associated with a run and saves them locally.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
for file in run.files():
    file.download()

Get runs from a specific sweep

This snippet downloads all the runs associated with a particular sweep.

import wandb

api = wandb.Api()

sweep = api.sweep("<entity>/<project>/<sweep_id>")
sweep_runs = sweep.runs

Get the best run from a sweep

The following snippet gets the best run from a given sweep.

import wandb

api = wandb.Api()

sweep = api.sweep("<entity>/<project>/<sweep_id>")
best_run = sweep.best_run()

The best_run is the run with the best metric as defined by the metric parameter in the sweep config.

Download the best model file from a sweep

This snippet downloads the model file with the highest validation accuracy from a sweep with runs that saved model files to model.h5.

import wandb

api = wandb.Api()

sweep = api.sweep("<entity>/<project>/<sweep_id>")
runs = sorted(sweep.runs, key=lambda run: run.summary.get("val_acc", 0), reverse=True)
val_acc = runs[0].summary.get("val_acc", 0)
print(f"Best run {runs[0].name} with {val_acc}% val accuracy")

runs[0].file("model.h5").download(replace=True)
print("Best model saved to model-best.h5")

Delete all files with a given extension from a run

This snippet deletes files with a given extension from a run.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")

extension = ".png"
files = run.files()
for file in files:
    if file.name.endswith(extension):
        file.delete()

Download system metrics data

This snippet produces a dataframe with all the system resource consumption metrics for a run and then saves it to a CSV.

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
system_metrics = run.history(stream="events")
system_metrics.to_csv("sys_metrics.csv")

Update summary metrics

You can pass a dictionary to update summary metrics.

summary.update({"key": val})

Get the command that ran the run

Each run captures the command that launched it on the run overview page. To pull this command down from the API, you can run:

import json

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")

meta = json.load(run.file("wandb-metadata.json").download())
program = ["python"] + [meta["program"]] + meta["args"]

1.10 - Environment variables

Set W&B environment variables.

When you’re running a script in an automated environment, you can control wandb with environment variables set before the script runs or within the script.

# This is secret and shouldn't be checked into version control
WANDB_API_KEY=$YOUR_API_KEY
# Name and notes optional
WANDB_NAME="My first run"
WANDB_NOTES="Smaller learning rate, more regularization."
# Only needed if you don't check in the wandb/settings file
WANDB_ENTITY=$username
WANDB_PROJECT=$project
# If you don't want your script to sync to the cloud
os.environ["WANDB_MODE"] = "offline"

Optional environment variables

Use these optional environment variables to do things like set up authentication on remote machines.

Variable name Usage
WANDB_ANONYMOUS Set this to allow, never, or must to let users create anonymous runs with secret urls.
WANDB_API_KEY Sets the authentication key associated with your account. You can find your key on your settings page. This must be set if wandb login hasn’t been run on the remote machine.
WANDB_BASE_URL If you’re using wandb/local you should set this environment variable to http://YOUR_IP:YOUR_PORT
WANDB_CACHE_DIR This defaults to ~/.cache/wandb, you can override this location with this environment variable
WANDB_CONFIG_DIR This defaults to ~/.config/wandb, you can override this location with this environment variable
WANDB_CONFIG_PATHS Comma separated list of yaml files to load into wandb.config. See config.
WANDB_CONSOLE Set this to “off” to disable stdout / stderr logging. This defaults to “on” in environments that support it.
WANDB_DIR Set this to an absolute path to store all generated files here instead of the wandb directory relative to your training script. be sure this directory exists and the user your process runs as can write to it
WANDB_DISABLE_GIT Prevent wandb from probing for a git repository and capturing the latest commit / diff.
WANDB_DISABLE_CODE Set this to true to prevent wandb from saving notebooks or git diffs. We’ll still save the current commit if we’re in a git repo.
WANDB_DOCKER Set this to a docker image digest to enable restoring of runs. This is set automatically with the wandb docker command. You can obtain an image digest by running wandb docker my/image/name:tag --digest
WANDB_ENTITY The entity associated with your run. If you have run wandb init in the directory of your training script, it will create a directory named wandb and will save a default entity which can be checked into source control. If you don’t want to create that file or want to override the file you can use the environmental variable.
WANDB_ERROR_REPORTING Set this to false to prevent wandb from logging fatal errors to its error tracking system.
WANDB_HOST Set this to the hostname you want to see in the wandb interface if you don’t want to use the system provided hostname
WANDB_IGNORE_GLOBS Set this to a comma separated list of file globs to ignore. These files will not be synced to the cloud.
WANDB_JOB_NAME Specify a name for any jobs created by wandb.
WANDB_JOB_TYPE Specify the job type, like “training” or “evaluation” to indicate different types of runs. See grouping for more info.
WANDB_MODE If you set this to “offline” wandb will save your run metadata locally and not sync to the server. If you set this to disabled wandb will turn off completely.
WANDB_NAME The human-readable name of your run. If not set it will be randomly generated for you
WANDB_NOTEBOOK_NAME If you’re running in jupyter you can set the name of the notebook with this variable. We attempt to auto detect this.
WANDB_NOTES Longer notes about your run. Markdown is allowed and you can edit this later in the UI.
WANDB_PROJECT The project associated with your run. This can also be set with wandb init, but the environmental variable will override the value.
WANDB_RESUME By default this is set to never. If set to auto wandb will automatically resume failed runs. If set to must forces the run to exist on startup. If you want to always generate your own unique ids, set this to allow and always set WANDB_RUN_ID.
WANDB_RUN_GROUP Specify the experiment name to automatically group runs together. See grouping for more info.
WANDB_RUN_ID Set this to a globally unique string (per project) corresponding to a single run of your script. It must be no longer than 64 characters. All non-word characters will be converted to dashes. This can be used to resume an existing run in cases of failure.
WANDB_SILENT Set this to true to silence wandb log statements. If this is set all logs will be written to WANDB_DIR/debug.log
WANDB_SHOW_RUN Set this to true to automatically open a browser with the run url if your operating system supports it.
WANDB_TAGS A comma separated list of tags to be applied to the run.
WANDB_USERNAME The username of a member of your team associated with the run. This can be used along with a service account API key to enable attribution of automated runs to members of your team.
WANDB_USER_EMAIL The email of a member of your team associated with the run. This can be used along with a service account API key to enable attribution of automated runs to members of your team.

Singularity environments

If you’re running containers in Singularity you can pass environment variables by pre-pending the above variables with SINGULARITYENV_. More details about Singularity environment variables can be found here.

Running on AWS

If you’re running batch jobs in AWS, it’s easy to authenticate your machines with your W&B credentials. Get your API key from your settings page, and set the WANDB_API_KEY environment variable in the AWS batch job spec.

2 - Sweeps

Hyperparameter search and model optimization with W&B Sweeps

Use W&B Sweeps to automate hyperparameter search and visualize rich, interactive experiment tracking. Pick from popular search methods such as Bayesian, grid search, and random search to explore the hyperparameter space. Scale and parallelize sweeps across one or more machines.

Draw insights from large hyperparameter tuning experiments with interactive dashboards.

How it works

Create a sweep with two W&B CLI commands:

  1. Initialize a sweep
wandb sweep --project <project-name> <path-to-config-file>
  2. Start the sweep agent
wandb agent <sweep-ID>

How to get started

Depending on your use case, explore the following resources to get started with W&B Sweeps:

For a step-by-step video, see: Tune Hyperparameters Easily with W&B Sweeps.

2.1 - Tutorial: Define, initialize, and run a sweep

Sweeps quickstart shows how to define, initialize, and run a sweep. There are four main steps

This page shows how to define, initialize, and run a sweep. There are four main steps:

  1. Set up your training code
  2. Define the search space with a sweep configuration
  3. Initialize the sweep
  4. Start the sweep agent

Copy and paste the following code into a Jupyter Notebook or Python script:

# Import the W&B Python Library and log into W&B
import wandb

wandb.login()

# 1: Define objective/training function
def objective(config):
    score = config.x**3 + config.y
    return score

def main():
    wandb.init(project="my-first-sweep")
    score = objective(wandb.config)
    wandb.log({"score": score})

# 2: Define the search space
sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "score"},
    "parameters": {
        "x": {"max": 0.1, "min": 0.01},
        "y": {"values": [1, 3, 7]},
    },
}

# 3: Start the sweep
sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep")

wandb.agent(sweep_id, function=main, count=10)

The following sections break down and explain each step in the code sample.

Set up your training code

Define a training function that takes in hyperparameter values from wandb.config and uses them to train a model and return metrics.

Optionally provide the name of the project where you want the output of the W&B Run to be stored (project parameter in wandb.init). If the project is not specified, the run is put in an “Uncategorized” project.

# 1: Define objective/training function
def objective(config):
    score = config.x**3 + config.y
    return score


def main():
    wandb.init(project="my-first-sweep")
    score = objective(wandb.config)
    wandb.log({"score": score})

Define the search space with a sweep configuration

Within a dictionary, specify which hyperparameters you want to sweep over and the values or distributions to search. For more information about configuration options, see Define sweep configuration.

The following example demonstrates a sweep configuration that uses a random search ('method':'random'). The sweep randomly selects values for x from the specified range and for y from the listed values.

Throughout the sweep, W&B optimizes the metric specified in the metric key (metric). In the following example, W&B minimizes ('goal':'minimize') the metric named score ('score').

# 2: Define the search space
sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "score"},
    "parameters": {
        "x": {"max": 0.1, "min": 0.01},
        "y": {"values": [1, 3, 7]},
    },
}

Initialize the Sweep

W&B uses a Sweep Controller to manage sweeps on the cloud (standard) or locally (local) across one or more machines. For more information about Sweep Controllers, see Search and stop algorithms locally.

A sweep identification number is returned when you initialize a sweep:

sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep")

For more information about initializing sweeps, see Initialize sweeps.

Start the Sweep

Use the wandb.agent API call to start a sweep.

wandb.agent(sweep_id, function=main, count=10)

Visualize results (optional)

Open your project to see your live results in the W&B App dashboard. With just a few clicks, construct rich, interactive charts like parallel coordinates plots, parameter importance analyses, and more.

Sweeps Dashboard example

For more information about how to visualize results, see Visualize sweep results. For an example dashboard, see this sample Sweeps Project.

Stop the agent (optional)

From the terminal, hit Ctrl+c to stop the run that the Sweep agent is currently running. To kill the agent, hit Ctrl+c again after the run is stopped.

2.2 - Add W&B (wandb) to your code

Add W&B to your Python code script or Jupyter Notebook.

There are numerous ways to add the W&B Python SDK to your script or Jupyter Notebook. Outlined below is a “best practice” example of how to integrate the W&B Python SDK into your own code.

Original training script

Suppose you have the following code in a Jupyter Notebook cell or Python script. We define a function called main that mimics a typical training loop. For each epoch, the accuracy and loss are computed on the training and validation data sets. The values are randomly generated for the purpose of this example.

We defined a dictionary called config where we store hyperparameter values (line 15). At the end of the cell, we call the main function to execute the mock training code.

# train.py
import random
import numpy as np


def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


config = {"lr": 0.0001, "bs": 16, "epochs": 5}


def main():
    # Read hyperparameter values from the `config` dictionary
    # instead of hard-coding them
    lr = config["lr"]
    bs = config["bs"]
    epochs = config["epochs"]

    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)

        print("epoch: ", epoch)
        print("training accuracy:", train_acc, "training loss:", train_loss)
        print("validation accuracy:", val_acc, "training loss:", val_loss)


# Call the main function.
main()

Training script with W&B Python SDK

The following code examples demonstrate how to add the W&B Python SDK into your code. If you start W&B Sweep jobs in the CLI, you will want to explore the CLI tab. If you start W&B Sweep jobs within a Jupyter notebook or Python script, explore the Python SDK tab.

To create a W&B Sweep, we added the following to the code example:

  1. Line 1: Import the Weights & Biases Python SDK.
  2. Line 6: Create a dictionary object where the key-value pairs define the sweep configuration. In the following example, the batch size (batch_size), epochs (epochs), and the learning rate (lr) hyperparameters are varied during each sweep. For more information on how to create a sweep configuration, see Define sweep configuration.
  3. Line 19: Pass the sweep configuration dictionary to wandb.sweep. This initializes the sweep. This returns a sweep ID (sweep_id). For more information on how to initialize sweeps, see Initialize sweeps.
  4. Line 33: Use the wandb.init() API to generate a background process to sync and log data as a W&B Run.
  5. Line 37-39: (Optional) define values from wandb.config instead of defining hard coded values.
  6. Line 45: Log the metric we want to optimize with wandb.log. You must log the metric defined in your configuration. Within the configuration dictionary (sweep_configuration in this example) we defined the sweep to maximize the val_acc value.
  7. Line 54: Start the sweep with the wandb.agent API call. Provide the sweep ID (line 19), the name of the function the sweep will execute (function=main), and set the maximum number of runs to try to four (count=4). For more information on how to start a W&B Sweep, see Start sweep agents.
import wandb
import numpy as np
import random

# Define sweep config
sweep_configuration = {
    "method": "random",
    "name": "sweep",
    "metric": {"goal": "maximize", "name": "val_acc"},
    "parameters": {
        "batch_size": {"values": [16, 32, 64]},
        "epochs": {"values": [5, 10, 15]},
        "lr": {"max": 0.1, "min": 0.0001},
    },
}

# Initialize sweep by passing in config.
# (Optional) Provide a name of the project.
sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep")


# Define training function that takes in hyperparameter
# values from `wandb.config` and uses them to train a
# model and return metric
def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


def main():
    run = wandb.init()

    # note that we define values from `wandb.config`
    # instead of defining hard values
    lr = wandb.config.lr
    bs = wandb.config.batch_size
    epochs = wandb.config.epochs

    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)

        wandb.log(
            {
                "epoch": epoch,
                "train_acc": train_acc,
                "train_loss": train_loss,
                "val_acc": val_acc,
                "val_loss": val_loss,
            }
        )


# Start sweep job.
wandb.agent(sweep_id, function=main, count=4)

To create a W&B Sweep, we first create a YAML configuration file. The configuration file contains the hyperparameters we want the sweep to explore. In the following example, the batch size (batch_size), epochs (epochs), and the learning rate (lr) hyperparameters are varied during each sweep.

# config.yaml
program: train.py
method: random
name: sweep
metric:
  goal: maximize
  name: val_acc
parameters:
  batch_size: 
    values: [16,32,64]
  lr:
    min: 0.0001
    max: 0.1
  epochs:
    values: [5, 10, 15]

For more information on how to create a W&B Sweep configuration, see Define sweep configuration.

Note that you must provide the name of your Python script for the program key in your YAML file.

Next, we add the following to the code example:

  1. Line 1-2: Import the Weights & Biases Python SDK (wandb) and PyYAML (yaml). PyYAML is used to read in our YAML configuration file.
  2. Line 18: Read in the configuration file.
  3. Line 21: Use the wandb.init() API to generate a background process to sync and log data as a W&B Run. We pass the config object to the config parameter.
  4. Line 25 - 27: Define hyperparameter values from wandb.config instead of using hard coded values.
  5. Line 33-39: Log the metric we want to optimize with wandb.log. You must log the metric defined in your configuration. Within the configuration dictionary (sweep_configuration in this example) we defined the sweep to maximize the val_acc value.
import wandb
import yaml
import random
import numpy as np


def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


def main():
    # Set up your default hyperparameters
    with open("./config.yaml") as file:
        config = yaml.load(file, Loader=yaml.FullLoader)

    run = wandb.init(config=config)

    # Note that we define values from `wandb.config`
    # instead of  defining hard values
    lr = wandb.config.lr
    bs = wandb.config.batch_size
    epochs = wandb.config.epochs

    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)

        wandb.log(
            {
                "epoch": epoch,
                "train_acc": train_acc,
                "train_loss": train_loss,
                "val_acc": val_acc,
                "val_loss": val_loss,
            }
        )


# Call the main function.
main()

Navigate to your CLI. Within your CLI, set a maximum number of runs the sweep agent should try. This step is optional. In the following example we set the maximum number to five.

NUM=5

Next, initialize the sweep with the wandb sweep command. Provide the name of the YAML file. Optionally provide the name of the project for the project flag (--project):

wandb sweep --project sweep-demo-cli config.yaml

This returns a sweep ID. For more information on how to initialize sweeps, see Initialize sweeps.

Copy the sweep ID and replace sweepID in the following code snippet to start the sweep job with the wandb agent command:

wandb agent --count $NUM your-entity/sweep-demo-cli/sweepID

For more information on how to start sweep jobs, see Start sweep jobs.

Consideration when logging metrics

Ensure to log the metric you specify in your sweep configuration explicitly to W&B. Do not log metrics for your sweep inside of a sub-directory.

For example, consider the following pseudocode. A user wants to log the validation loss ("val_loss": loss). First they pass the values into a dictionary (line 16). However, the dictionary passed to wandb.log does not explicitly access the key-value pair in the dictionary:

# Import the W&B Python Library and log into W&B
import wandb
import random


def train():
    epoch = random.randint(1, 10)  # placeholder epoch for this pseudocode
    offset = random.random() / 5
    acc = 1 - 2**-epoch - random.random() / epoch - offset
    loss = 2**-epoch + random.random() / epoch + offset

    val_metrics = {"val_loss": loss, "val_acc": acc}
    return val_metrics


def main():
    wandb.init(entity="<entity>", project="my-first-sweep")
    val_metrics = train()
    # highlight-next-line
    wandb.log({"val_loss": val_metrics})


sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "val_loss"},
    "parameters": {
        "x": {"max": 0.1, "min": 0.01},
        "y": {"values": [1, 3, 7]},
    },
}

sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep")

wandb.agent(sweep_id, function=main, count=10)

Instead, explicitly access the key-value pair within the Python dictionary. For example, in the following code, after you create the dictionary, specify the key-value pair when you pass the dictionary to the wandb.log method:

# Import the W&B Python Library and log into W&B
import wandb
import random


def train():
    epoch = random.randint(1, 10)  # placeholder epoch for this pseudocode
    offset = random.random() / 5
    acc = 1 - 2**-epoch - random.random() / epoch - offset
    loss = 2**-epoch + random.random() / epoch + offset

    val_metrics = {"val_loss": loss, "val_acc": acc}
    return val_metrics


def main():
    wandb.init(entity="<entity>", project="my-first-sweep")
    val_metrics = train()
    # highlight-next-line
    wandb.log({"val_loss", val_metrics["val_loss"]})


sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "val_loss"},
    "parameters": {
        "x": {"max": 0.1, "min": 0.01},
        "y": {"values": [1, 3, 7]},
    },
}

sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep")

wandb.agent(sweep_id, function=main, count=10)

2.3 - Define a sweep configuration

Learn how to create configuration files for sweeps.

A W&B Sweep combines a strategy for exploring hyperparameter values with the code that evaluates them. The strategy can be as simple as trying every option or as complex as Bayesian Optimization and Hyperband (BOHB).

Define a sweep configuration either in a Python dictionary or a YAML file. How you define your sweep configuration depends on how you want to manage your sweep.

The following guide describes how to format your sweep configuration. See Sweep configuration options for a comprehensive list of top-level sweep configuration keys.

Basic structure

Both sweep configuration format options (YAML and Python dictionary) utilize key-value pairs and nested structures.

Use top-level keys within your sweep configuration to define qualities of your sweep search such as the name of the sweep (name key), the parameters to search through (parameters key), the methodology to search the parameter space (method key), and more.

For example, the following code snippets show the same sweep configuration defined within a YAML file and within a Python dictionary. Within the sweep configuration there are five top-level keys specified: program, name, method, metric, and parameters.

Define a sweep configuration in a YAML file if you want to manage sweeps interactively from the command line (CLI).

program: train.py
name: sweepdemo
method: bayes
metric:
  goal: minimize
  name: validation_loss
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  batch_size:
    values: [16, 32, 64]
  epochs:
    values: [5, 10, 15]
  optimizer:
    values: ["adam", "sgd"]

Define a sweep in a Python dictionary data structure if you define your training algorithm in a Python script or Jupyter notebook.

The following code snippet stores a sweep configuration in a variable named sweep_configuration:

sweep_configuration = {
    "name": "sweepdemo",
    "method": "bayes",
    "metric": {"goal": "minimize", "name": "validation_loss"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
        "epochs": {"values": [5, 10, 15]},
        "optimizer": {"values": ["adam", "sgd"]},
    },
}

Within the top level parameters key, the following keys are nested: learning_rate, batch_size, epochs, and optimizer. For each of the nested keys you specify, you can provide one or more values, a distribution, a probability, and more. For more information, see the parameters section in Sweep configuration options.

Double nested parameters

Sweep configurations support nested parameters. To delineate a nested parameter, use an additional parameters key under the top level parameter name. Sweep configs support multi-level nesting.
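
For example, a minimal sketch of a nested parameter expressed as a Python dictionary; the hyperparameter names used here (batch_size, optimizer, learning_rate) are illustrative:

sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "loss"},
    "parameters": {
        # A flat, top-level hyperparameter
        "batch_size": {"values": [16, 32]},
        # A nested hyperparameter: add another `parameters` key
        # under the top-level parameter name
        "optimizer": {
            "parameters": {
                "name": {"values": ["adam", "sgd"]},
                "learning_rate": {"min": 0.0001, "max": 0.1},
            }
        },
    },
}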

Specify a probability distribution for your random variables if you use a Bayesian or random hyperparameter search. For each hyperparameter:

  1. Create a top level parameters key in your sweep config.
  2. Within the parameters key, nest the following:
    1. Specify the name of the hyperparameter you want to optimize.
    2. Specify the distribution you want to use for the distribution key. Nest the distribution key-value pair underneath the hyperparameter name.
    3. Specify one or more values to explore. The value (or values) must be consistent with the distribution you specify.
      1. (Optional) Use an additional parameters key under the top level parameter name to delineate a nested parameter.

Sweep configuration template

The following template shows how you can configure parameters and specify search constraints. Replace hyperparameter_name with the name of your hyperparameter and any values enclosed in <>.

program: <insert>
method: <insert>
parameters:
  hyperparameter_name0:
    value: 0  
  hyperparameter_name1: 
    values: [0, 0, 0]
  hyperparameter_name: 
    distribution: <insert>
    value: <insert>
  hyperparameter_name2:  
    distribution: <insert>
    min: <insert>
    max: <insert>
    q: <insert>
  hyperparameter_name3: 
    distribution: <insert>
    values:
      - <list_of_values>
      - <list_of_values>
      - <list_of_values>
early_terminate:
  type: hyperband
  s: 0
  eta: 0
  max_iter: 0
command:
- ${Command macro}
- ${Command macro}
- ${Command macro}
- ${Command macro}      

Sweep configuration examples

program: train.py
method: random
metric:
  goal: minimize
  name: loss
parameters:
  batch_size:
    distribution: q_log_uniform_values
    max: 256 
    min: 32
    q: 8
  dropout: 
    values: [0.3, 0.4, 0.5]
  epochs:
    value: 1
  fc_layer_size: 
    values: [128, 256, 512]
  learning_rate:
    distribution: uniform
    max: 0.1
    min: 0
  optimizer:
    values: ["adam", "sgd"]

The same configuration expressed as a Python dictionary:

sweep_config = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "loss"},
    "parameters": {
        "batch_size": {
            "distribution": "q_log_uniform_values",
            "max": 256,
            "min": 32,
            "q": 8,
        },
        "dropout": {"values": [0.3, 0.4, 0.5]},
        "epochs": {"value": 1},
        "fc_layer_size": {"values": [128, 256, 512]},
        "learning_rate": {"distribution": "uniform", "max": 0.1, "min": 0},
        "optimizer": {"values": ["adam", "sgd"]},
    },
}

Bayes hyperband example

program: train.py
method: bayes
metric:
  goal: minimize
  name: val_loss
parameters:
  dropout:
    values: [0.15, 0.2, 0.25, 0.3, 0.4]
  hidden_layer_size:
    values: [96, 128, 148]
  layer_1_size:
    values: [10, 12, 14, 16, 18, 20]
  layer_2_size:
    values: [24, 28, 32, 36, 40, 44]
  learn_rate:
    values: [0.001, 0.01, 0.003]
  decay:
    values: [1e-5, 1e-6, 1e-7]
  momentum:
    values: [0.8, 0.9, 0.95]
  epochs:
    value: 27
early_terminate:
  type: hyperband
  s: 2
  eta: 3
  max_iter: 27

The following examples show how to specify either a minimum or maximum number of iterations for early_terminate:

early_terminate:
  type: hyperband
  min_iter: 3

The brackets for this example are: [3, 3*eta, 3*eta*eta, 3*eta*eta*eta], which equals [3, 9, 27, 81].

early_terminate:
  type: hyperband
  max_iter: 27
  s: 2

The brackets for this example are [27/eta, 27/eta/eta], which equals [9, 3].
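
The following Python sketch shows how these bracket values can be derived from the settings above; it is illustrative only and not W&B's implementation (eta defaults to 3):

# Illustrative: derive Hyperband bracket iteration counts from early_terminate settings.
def brackets_from_min_iter(min_iter, eta=3, num_brackets=4):
    # Brackets grow from min_iter by factors of eta.
    return [min_iter * eta**k for k in range(num_brackets)]

def brackets_from_max_iter(max_iter, s, eta=3):
    # Brackets shrink from max_iter by factors of eta.
    return [max_iter // eta**k for k in range(1, s + 1)]

print(brackets_from_min_iter(3))  # [3, 9, 27, 81]
print(brackets_from_max_iter(27, 2))  # [9, 3]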

Command example

program: main.py
metric:
  name: val_loss
  goal: minimize

method: bayes
parameters:
  optimizer.config.learning_rate:
    min: !!float 1e-5
    max: 0.1
  experiment:
    values: [expt001, expt002]
  optimizer:
    values: [sgd, adagrad, adam]

command:
- ${env}
- ${interpreter}
- ${program}
- ${args_no_hyphens}

For example, on Unix systems the expanded command looks like the first line below; on Windows, where ${env} is omitted, it looks like the second:

/usr/bin/env python train.py --param1=value1 --param2=value2
python train.py --param1=value1 --param2=value2

The following examples show how to specify common command macros:

Remove the ${interpreter} macro and provide a value explicitly to hardcode the Python interpreter. For example, the following code snippet demonstrates how to do this:

command:
  - ${env}
  - python3
  - ${program}
  - ${args}

The following shows how to add extra command line arguments not specified by sweep configuration parameters:

command:
  - ${env}
  - ${interpreter}
  - ${program}
  - "--config"
  - "your-training-config.json"
  - ${args}

If your program does not use argument parsing you can avoid passing arguments altogether and take advantage of wandb.init picking up sweep parameters into wandb.config automatically:

command:
  - ${env}
  - ${interpreter}
  - ${program}
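
For example, a minimal sketch of a training script that reads its hyperparameters from wandb.config rather than from command-line arguments; the learning_rate parameter name and the logged metric are placeholders:

# train.py (sketch): no argument parsing required
import wandb

def main():
    wandb.init()
    # The sweep agent populates wandb.config with the chosen hyperparameter values.
    lr = wandb.config.learning_rate
    wandb.log({"loss": 1.0 / (1.0 + lr)})  # placeholder metric

if __name__ == "__main__":
    main()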

You can change the command to pass arguments the way tools like Hydra expect. See Hydra with W&B for more information.

command:
  - ${env}
  - ${interpreter}
  - ${program}
  - ${args_no_hyphens}

2.3.1 - Sweep configuration options

A sweep configuration consists of nested key-value pairs. Use top-level keys within your sweep configuration to define qualities of your sweep search such as the parameters to search through (parameters key), the methodology to search the parameter space (method key), and more.

The following table lists top-level sweep configuration keys and a brief description. See the respective sections for more information about each key.

Top-level keys Description
program (required) Training script to run
entity The entity for this sweep
project The project for this sweep
description Text description of the sweep
name The name of the sweep, displayed in the W&B UI.
method (required) The search strategy
metric The metric to optimize (only used by certain search strategies and stopping criteria)
parameters (required) Parameter bounds to search
early_terminate Any early stopping criteria
command Command structure for invoking and passing arguments to the training script
run_cap Maximum number of runs for this sweep

See the Sweep configuration structure for more information on how to structure your sweep configuration.

metric

Use the metric top-level sweep configuration key to specify the name, the goal, and the target metric to optimize.

Key Description
name Name of the metric to optimize.
goal Either minimize or maximize (Default is minimize).
target Goal value for the metric you are optimizing. The sweep does not create new runs once a run reaches the target value that you specify. Active agents that have a run executing (when the run reaches the target) wait until the run completes before the agent stops creating new runs. See the sketch after this table for an example.
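
The following is a minimal sketch of a metric block that uses a target; the metric name and the 0.95 value are placeholders:

sweep_configuration = {
    "method": "bayes",
    "metric": {
        "name": "val_acc",  # must match a metric you log with wandb.log
        "goal": "maximize",
        "target": 0.95,  # stop creating new runs once a run reaches 0.95
    },
    "parameters": {"learning_rate": {"min": 0.0001, "max": 0.1}},
}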

parameters

In your YAML file or Python script, specify parameters as a top level key. Within the parameters key, provide the name of a hyperparameter you want to optimize. Common hyperparameters include: learning rate, batch size, epochs, optimizers, and more. For each hyperparameter you define in your sweep configuration, specify one or more search constraints.

The following table shows supported hyperparameter search constraints. Based on your hyperparameter and use case, use one of the search constraints below to tell your sweep agent where (in the case of a distribution) or what (value, values, and so forth) to search or use. An example that uses probabilities follows the table.

Search constraint Description
values Specifies all valid values for this hyperparameter. Compatible with grid.
value Specifies the single valid value for this hyperparameter. Compatible with grid.
distribution Specify a probability distribution. See the note following this table for information on default values.
probabilities Specify the probability of selecting each element of values when using random.
min, max (int or float) Maximum and minimum values. If int, for int_uniform-distributed hyperparameters. If float, for uniform-distributed hyperparameters.
mu (float) Mean parameter for normal- or lognormal-distributed hyperparameters.
sigma (float) Standard deviation parameter for normal- or lognormal-distributed hyperparameters.
q (float) Quantization step size for quantized hyperparameters.
parameters Nest other parameters inside a root level parameter.
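
For example, a minimal sketch that combines probabilities with random search; the optimizer values and weights are illustrative:

sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "loss"},
    "parameters": {
        # Sample "sgd" 70% of the time and "adam" 30% of the time
        "optimizer": {"values": ["sgd", "adam"], "probabilities": [0.7, 0.3]},
        "learning_rate": {"distribution": "uniform", "min": 0.0001, "max": 0.1},
    },
}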

method

Specify the hyperparameter search strategy with the method key. There are three hyperparameter search strategies to choose from: grid, random, and Bayesian search.

Iterate over every combination of hyperparameter values. Grid search makes uninformed decisions on the set of hyperparameter values to use on each iteration. Grid search can be computationally costly.

Grid search executes forever if it is searching within a continuous search space.

Choose a random, uninformed, set of hyperparameter values on each iteration based on a distribution. Random search runs forever unless you stop the process from the command line, within your python script, or the W&B App UI.

Specify the distribution space with the distribution key if you choose random (method: random) search.

In contrast to random and grid search, Bayesian models make informed decisions. Bayesian optimization uses a probabilistic model to decide which values to use through an iterative process of testing values on a surrogate function before evaluating the objective function. Bayesian search works well for small numbers of continuous parameters but scales poorly. For more information about Bayesian search, see the Bayesian Optimization Primer paper.

Bayesian search runs forever unless you stop the process from the command line, within your python script, or the W&B App UI.

Within the parameters key, nest the name of the hyperparameter. Next, specify the distribution key and specify a distribution for the value.

The following table lists the distributions that W&B supports.

Value for distribution key Description
constant Constant distribution. Must specify the constant value (value) to use.
categorical Categorical distribution. Must specify all valid values (values) for this hyperparameter.
int_uniform Discrete uniform distribution on integers. Must specify max and min as integers.
uniform Continuous uniform distribution. Must specify max and min as floats.
q_uniform Quantized uniform distribution. Returns round(X / q) * q where X is uniform. q defaults to 1.
log_uniform Log-uniform distribution. Returns a value X between exp(min) and exp(max) such that the natural logarithm is uniformly distributed between min and max.
log_uniform_values Log-uniform distribution. Returns a value X between min and max such that log(X) is uniformly distributed between log(min) and log(max).
q_log_uniform Quantized log uniform. Returns round(X / q) * q where X is log_uniform. q defaults to 1.
q_log_uniform_values Quantized log uniform. Returns round(X / q) * q where X is log_uniform_values. q defaults to 1.
inv_log_uniform Inverse log uniform distribution. Returns X, where log(1/X) is uniformly distributed between min and max.
inv_log_uniform_values Inverse log uniform distribution. Returns X, where log(1/X) is uniformly distributed between log(1/max) and log(1/min).
normal Normal distribution. Return value is normally distributed with mean mu (default 0) and standard deviation sigma (default 1).
q_normal Quantized normal distribution. Returns round(X / q) * q where X is normal. q defaults to 1.
log_normal Log normal distribution. Returns a value X such that the natural logarithm log(X) is normally distributed with mean mu (default 0) and standard deviation sigma (default 1).
q_log_normal Quantized log normal distribution. Returns round(X / q) * q where X is log_normal. q defaults to 1.

early_terminate

Use early termination (early_terminate) to stop poorly performing runs. If early termination occurs, W&B stops the current run before it creates a new run with a new set of hyperparameter values.

Stopping algorithm

Hyperband hyperparameter optimization evaluates whether a program should stop or should continue at one or more pre-set iteration counts, called brackets.

When a W&B run reaches a bracket, the sweep compares that run’s metric to all previously reported metric values. The sweep terminates the run if the run’s metric value is too high (when the goal is minimization) or if the run’s metric is too low (when the goal is maximization).

Brackets are based on the number of logged iterations. The number of brackets corresponds to the number of times you log the metric you are optimizing. The iterations can correspond to steps, epochs, or something in between. The numerical value of the step counter is not used in bracket calculations.

Key Description
min_iter Specify the iteration for the first bracket
max_iter Specify the maximum number of iterations.
s Specify the total number of brackets (required for max_iter)
eta Specify the bracket multiplier schedule (default: 3).
strict Enable ‘strict’ mode that prunes runs aggressively, more closely following the original Hyperband paper. Defaults to false.

command

Modify the format and contents of the command used to invoke your training script with nested values within the command key. You can directly include fixed components such as filenames.

W&B supports the following macros for variable components of the command:

Command macro Description
${env} /usr/bin/env on Unix systems, omitted on Windows.
${interpreter} Expands to python.
${program} Training script filename specified by the sweep configuration program key.
${args} Hyperparameters and their values in the form --param1=value1 --param2=value2.
${args_no_boolean_flags} Hyperparameters and their values in the form --param1=value1 except boolean parameters are in the form --boolean_flag_param when True and omitted when False.
${args_no_hyphens} Hyperparameters and their values in the form param1=value1 param2=value2.
${args_json} Hyperparameters and their values encoded as JSON.
${args_json_file} The path to a file containing the hyperparameters and their values encoded as JSON.
${envvar} A way to pass environment variables. ${envvar:MYENVVAR} expands to the value of the MYENVVAR environment variable.

2.4 - Initialize a sweep

Initialize a W&B Sweep

W&B uses a Sweep Controller to manage sweeps on the cloud (standard) or locally (local) across one or more machines. After a run completes, the sweep controller will issue a new set of instructions describing a new run to execute. These instructions are picked up by agents who actually perform the runs. In a typical W&B Sweep, the controller lives on the W&B server. Agents live on your machines.

The following code snippets demonstrate how to initialize sweeps with the CLI and within a Jupyter Notebook or Python script.

Use the W&B SDK to initialize a sweep. Pass the sweep configuration dictionary to the sweep parameter. Optionally provide the name of the project for the project parameter (project) where you want the output of the W&B Run to be stored. If the project is not specified, the run is put in an “Uncategorized” project.

import wandb

# Example sweep configuration
sweep_configuration = {
    "method": "random",
    "name": "sweep",
    "metric": {"goal": "maximize", "name": "val_acc"},
    "parameters": {
        "batch_size": {"values": [16, 32, 64]},
        "epochs": {"values": [5, 10, 15]},
        "lr": {"max": 0.1, "min": 0.0001},
    },
}

sweep_id = wandb.sweep(sweep=sweep_configuration, project="project-name")

The wandb.sweep function returns the sweep ID. The sweep ID includes the entity name and the project name. Make a note of the sweep ID.

Use the W&B CLI to initialize a sweep. Provide the name of your configuration file. Optionally provide the name of the project for the project flag. If the project is not specified, the W&B Run is put in an “Uncategorized” project.

Use the wandb sweep command to initialize a sweep. The following code example initializes a sweep for a sweeps_demo project and uses a config.yaml file for the configuration.

wandb sweep --project sweeps_demo config.yaml

This command will print out a sweep ID. The sweep ID includes the entity name and the project name. Make a note of the sweep ID.

2.5 - Start or stop a sweep agent

Start or stop a W&B Sweep Agent on one or more machines.

Start a W&B Sweep on one or more agents on one or more machines. W&B Sweep agents query the W&B server you launched when you initialized a W&B Sweep (wandb sweep) for hyperparameters and use them to run model training.

To start a W&B Sweep agent, provide the W&B Sweep ID that was returned when you initialized a W&B Sweep. The W&B Sweep ID has the form:

entity/project/sweep_ID

Where:

  • entity: Your W&B username or team name.
  • project: The name of the project where you want the output of the W&B Run to be stored. If the project is not specified, the run is put in an “Uncategorized” project.
  • sweep_ID: The pseudo random, unique ID generated by W&B.

Provide the name of the function the W&B Sweep will execute if you start a W&B Sweep agent within a Jupyter Notebook or Python script.

The following code snippets demonstrate how to start an agent with W&B. We assume you already have a configuration file and you have already initialized a W&B Sweep. For more information about how to define a configuration file, see Define sweep configuration.

Use the wandb agent command to start a sweep. Provide the sweep ID that was returned when you initialized the sweep. Copy and paste the code snippet below and replace sweep_id with your sweep ID:

wandb agent sweep_id

Use the W&B Python SDK library to start a sweep. Provide the sweep ID that was returned when you initialized the sweep. In addition, provide the name of the function the sweep will execute.

wandb.agent(sweep_id=sweep_id, function=function_name)

Stop W&B agent

Optionally specify the number of W&B Runs a Sweep agent should try. The following code snippets demonstrate how to set a maximum number of W&B Runs with the CLI and within a Jupyter Notebook or Python script.

First, initialize your sweep. For more information, see Initialize sweeps.

sweep_id = wandb.sweep(sweep_config)

Next, start the sweep job. Provide the sweep ID generated from sweep initiation. Pass an integer value to the count parameter to set the maximum number of runs to try.

sweep_id, count = "dtzl1o7u", 10
wandb.agent(sweep_id, count=count)

First, initialize your sweep with the wandb sweep command. For more information, see Initialize sweeps.

wandb sweep config.yaml

Pass an integer value to the count flag to set the maximum number of runs to try.

NUM=10
SWEEPID="dtzl1o7u"
wandb agent --count $NUM $SWEEPID

2.6 - Parallelize agents

Parallelize W&B Sweep agents on multi-core or multi-GPU machine.

Parallelize your W&B Sweep agents on a multi-core or multi-GPU machine. Before you get started, ensure you have initialized your W&B Sweep. For more information on how to initialize a W&B Sweep, see Initialize sweeps.

Parallelize on a multi-CPU machine

Depending on your use case, explore the following tabs to learn how to parallelize W&B Sweep agents using the CLI or within a Jupyter Notebook.

Use the wandb agent command to parallelize your W&B Sweep agent across multiple CPUs with the terminal. Provide the sweep ID that was returned when you initialized the sweep.

  1. Open more than one terminal window on your local machine.
  2. Copy and paste the code snippet below and replace sweep_id with your sweep ID:
wandb agent sweep_id

Use the W&B Python SDK library to parallelize your W&B Sweep agent across multiple CPUs within Jupyter Notebooks. Ensure you have the sweep ID that was returned when you initialized the sweep. In addition, provide the name of the function the sweep will execute for the function parameter:

  1. Open more than one Jupyter Notebook.
  2. Copy and paste the W&B Sweep ID into multiple Jupyter Notebooks to parallelize a W&B Sweep. For example, you can paste the following code snippet into multiple Jupyter Notebooks to parallelize your sweep if you have the sweep ID stored in a variable called sweep_id and the name of the function is function_name:
wandb.agent(sweep_id=sweep_id, function=function_name)

Parallelize on a multi-GPU machine

Follow the procedure outlined to parallelize your W&B Sweep agent across multiple GPUs with a terminal using CUDA Toolkit:

  1. Open more than one terminal window on your local machine.
  2. Specify the GPU instance to use with CUDA_VISIBLE_DEVICES when you start a W&B Sweep job (wandb agent). Assign CUDA_VISIBLE_DEVICES an integer value corresponding to the GPU instance to use.

For example, suppose you have two NVIDIA GPUs on your local machine. Open a terminal window and set CUDA_VISIBLE_DEVICES to 0 (CUDA_VISIBLE_DEVICES=0). Replace sweep_ID in the following example with the W&B Sweep ID that is returned when you initialize a W&B Sweep:

Terminal 1

CUDA_VISIBLE_DEVICES=0 wandb agent sweep_ID

Open a second terminal window. Set CUDA_VISIBLE_DEVICES to 1 (CUDA_VISIBLE_DEVICES=1). Paste the same W&B Sweep ID for sweep_ID that you used in the previous terminal:

Terminal 2

CUDA_VISIBLE_DEVICES=1 wandb agent sweep_ID

2.7 - Visualize sweep results

Visualize the results of your W&B Sweeps with the W&B App UI.

Visualize the results of your W&B Sweeps with the W&B App UI. Navigate to the W&B App UI at https://wandb.ai/home. Choose the project that you specified when you initialized a W&B Sweep. You will be redirected to your project workspace. Select the Sweep icon on the left panel (broom icon). From the Sweep UI, select the name of your Sweep from the list.

By default, W&B will automatically create a parallel coordinates plot, a parameter importance plot, and a scatter plot when you start a W&B Sweep job.

Animation that shows how to navigate to the Sweep UI interface and view autogenerated plots.

Parallel coordinates charts summarize the relationship between large numbers of hyperparameters and model metrics at a glance. For more information on parallel coordinates plots, see Parallel coordinates.

Example parallel coordinates plot.

The scatter plot (left) compares the W&B Runs that were generated during the Sweep. For more information about scatter plots, see Scatter Plots.

The parameter importance plot (right) lists the hyperparameters that were the best predictors of, and highly correlated to, desirable values of your metrics. For more information about parameter importance plots, see Parameter Importance.

Example scatter plot (left) and parameter importance plot (right).

You can alter the dependent and independent values (x and y axis) that are automatically used. Within each panel there is a pencil icon called Edit panel. Choose Edit panel. A modal will appear. Within the modal, you can alter the behavior of the graph.

For more information on all default W&B visualization options, see Panels. See the Data Visualization docs for information on how to create plots from W&B Runs that are not part of a W&B Sweep.

2.8 - Manage sweeps with the CLI

Pause, resume, and cancel a W&B Sweep with the CLI.

Pause, resume, and cancel a W&B Sweep with the CLI. Pausing a W&B Sweep tells the W&B agent that new W&B Runs should not be executed until the Sweep is resumed. Resuming a Sweep tells the agent to continue executing new W&B Runs. Stopping a W&B Sweep tells the W&B Sweep agent to stop creating or executing new W&B Runs. Cancelling a W&B Sweep tells the Sweep agent to kill currently executing W&B Runs and stop executing new Runs.

In each case, provide the W&B Sweep ID that was generated when you initialized a W&B Sweep. Optionally open a new terminal window to execute the following commands. A new terminal window makes it easier to execute a command if a W&B Sweep is printing output statements to your current terminal window.

Use the following guidance to pause, resume, and cancel sweeps.

Pause sweeps

Pause a W&B Sweep so it temporarily stops executing new W&B Runs. Use the wandb sweep --pause command to pause a W&B Sweep. Provide the W&B Sweep ID that you want to pause.

wandb sweep --pause entity/project/sweep_ID

Resume sweeps

Resume a paused W&B Sweep with the wandb sweep --resume command. Provide the W&B Sweep ID that you want to resume:

wandb sweep --resume entity/project/sweep_ID

Stop sweeps

Finish a W&B Sweep to stop executing new W&B Runs and let currently executing Runs finish.

wandb sweep --stop entity/project/sweep_ID

Cancel sweeps

Cancel a sweep to kill all running runs and stop running new runs. Use the wandb sweep --cancel command to cancel a W&B Sweep. Provide the W&B Sweep ID that you want to cancel.

wandb sweep --cancel entity/project/sweep_ID

For a full list of CLI command options, see the wandb sweep CLI Reference Guide.

Pause, resume, stop, and cancel a sweep across multiple agents

Pause, resume, stop, or cancel a W&B Sweep across multiple agents from a single terminal. For example, suppose you have a multi-core machine. After you initialize a W&B Sweep, you open new terminal windows and copy the Sweep ID to each new terminal.

Within any terminal, use the wandb sweep CLI command to pause, resume, stop, or cancel a W&B Sweep. For example, the following code snippet demonstrates how to pause a W&B Sweep across multiple agents with the CLI:

wandb sweep --pause entity/project/sweep_ID

Specify the --resume flag along with the Sweep ID to resume the Sweep across your agents:

wandb sweep --resume entity/project/sweep_ID

For more information on how to parallelize W&B agents, see Parallelize agents.

2.9 - Learn more about sweeps

Collection of useful sources for Sweeps.

Academic papers

Li, Lisha, et al. “Hyperband: A novel bandit-based approach to hyperparameter optimization.” The Journal of Machine Learning Research 18.1 (2017): 6765-6816.

Sweep Experiments

The following W&B Reports demonstrate examples of projects that explore hyperparameter optimization with W&B Sweeps.

The following how-to-guide demonstrates how to solve real-world problems with W&B:

Sweep GitHub repository

W&B advocates open source and welcomes contributions from the community. Find the GitHub repository at https://github.com/wandb/sweeps. For information on how to contribute to the W&B open source repo, see the W&B GitHub Contribution guidelines.

2.10 - Manage algorithms locally

Search and stop algorithms locally instead of using the W&B cloud-hosted service.

The hyperparameter controller is hosted by Weights & Biases as a cloud service by default. W&B agents communicate with the controller to determine the next set of parameters to use for training. The controller is also responsible for running early stopping algorithms to determine which runs can be stopped.

The local controller feature allows the user to run search and stop algorithms locally. The local controller gives the user the ability to inspect and instrument the code in order to debug issues as well as develop new features which can be incorporated into the cloud service.

Before you get started, you must install the W&B SDK (wandb). Type the following code snippet into your command line:

pip install wandb sweeps 

The following examples assume you already have a configuration file and a training loop defined in a python script or Jupyter Notebook. For more information about how to define a configuration file, see Define sweep configuration.

Run the local controller from the command line

Initialize a sweep similarly to how you normally would when you use hyperparameter controllers hosted by W&B as a cloud service. Specify the controller flag (--controller) to indicate you want to use the local controller for W&B Sweep jobs:

wandb sweep --controller config.yaml

Alternatively, you can separate initializing a sweep and specifying that you want to use a local controller into two steps.

To separate the steps, first add the following key-value to your sweep’s YAML configuration file:

controller:
  type: local

Next, initialize the sweep:

wandb sweep config.yaml

After you initialize the sweep, start a controller with wandb controller:

# wandb sweep command will print a sweep_id
wandb controller {entity}/{project}/{sweep_id}

Once you have specified that you want to use a local controller, start one or more Sweep agents to execute the sweep. Start the agents similarly to how you normally would. See Start sweep agents for more information.

wandb agent sweep_ID

Run a local controller with W&B Python SDK

The following code snippets demonstrate how to specify and use a local controller with the W&B Python SDK.

The simplest way to use a controller with the Python SDK is to pass the sweep ID to the wandb.controller method. Next, use the returned object’s run method to start the sweep job:

sweep = wandb.controller(sweep_id)
sweep.run()

If you want more control of the controller loop:

import time
import wandb

sweep = wandb.controller(sweep_id)
while not sweep.done():
    sweep.print_status()
    sweep.step()
    time.sleep(5)

Or even more control over the parameters served:

import wandb

sweep = wandb.controller(sweep_id)
while not sweep.done():
    params = sweep.search()
    sweep.schedule(params)
    sweep.print_status()

If you want to specify your sweep entirely with code you can do something like this:

import wandb

sweep = wandb.controller()
sweep.configure_search("grid")
sweep.configure_program("train-dummy.py")
sweep.configure_controller(type="local")
sweep.configure_parameter("param1", value=3)
sweep.create()
sweep.run()

2.11 - Sweeps troubleshooting

Troubleshoot common W&B Sweep issues.

Troubleshoot common error messages with the guidance suggested.

CommError, Run does not exist and ERROR Error uploading

You might have defined your W&B Run ID if these two error messages are both returned. As an example, you might have a similar code snippet defined somewhere in your Jupyter Notebook or Python script:

wandb.init(id="some-string")

You cannot set a Run ID for W&B Sweeps because W&B automatically generates random, unique IDs for Runs created by W&B Sweeps.

W&B Run IDs need to be unique within a project.

We recommend you pass a name to the name parameter when you initialize W&B if you want to set a custom name that will appear on tables and graphs. For example:

wandb.init(name="a helpful readable run name")

Cuda out of memory

Refactor your code to use process-based executions if you see this error message. More specifically, rewrite your code to a Python script. In addition, call the W&B Sweep Agent from the CLI, instead of the W&B Python SDK.

As an example, suppose you rewrite your code to a Python script called train.py. Add the name of the training script (train.py) to your YAML Sweep configuration file (config.yaml in this example):

program: train.py
method: bayes
metric:
  name: validation_loss
  goal: minimize
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  optimizer:
    values: ["adam", "sgd"]

Next, add the following to your train.py Python script:

if _name_ == "_main_":
    train()

Navigate to your CLI and initialize a W&B Sweep with wandb sweep:

wandb sweep config.yaml

Make a note of the W&B Sweep ID that is returned. Next, start the Sweep job with wandb agent with the CLI instead of the Python SDK (wandb.agent). Replace sweep_ID in the code snippet below with the Sweep ID that was returned in the previous step:

wandb agent sweep_ID

anaconda 400 error

The following error usually occurs when you do not log the metric that you are optimizing:

wandb: ERROR Error while calling W&B API: anaconda 400 error: 
{"code": 400, "message": "TypeError: bad operand type for unary -: 'NoneType'"}

Within your YAML file or nested dictionary you specify a key named “metric” to optimize. Ensure that you log (wandb.log) this metric. In addition, ensure you use the exact metric name that you defined the sweep to optimize within your Python script or Jupyter Notebook. For more information about configuration files, see Define sweep configuration.
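
For example, if your sweep configuration names the metric val_loss, a minimal sketch of a matching log call looks like the following; the loss value here is a placeholder for your real validation loss:

import wandb

sweep_configuration = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {"learning_rate": {"min": 0.0001, "max": 0.1}},
}

def main():
    wandb.init()
    loss = 0.5  # placeholder: compute your real validation loss here
    # The key must exactly match the metric name in the sweep configuration.
    wandb.log({"val_loss": loss})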

2.12 - Sweeps UI

Describes the different components of the Sweeps UI.

The state (State), creation time (Created), the entity that started the sweep (Creator), the number of runs completed (Run count), and the time it took to compute the sweep (Compute time) are displayed in the Sweeps UI. The expected number of runs a sweep will create (Est. Runs) is provided when you do a grid search over a discrete search space. You can also click on a sweep to pause, resume, stop, or kill the sweep from the interface.

2.13 - Tutorial: Create sweep job from project

Tutorial on how to create sweep jobs from a pre-existing W&B project.

This tutorial explains how to create sweep jobs from a pre-existing W&B project. We will use the Fashion MNIST dataset to train a PyTorch convolutional neural network to classify images. The required code and dataset are located in the W&B repo: https://github.com/wandb/examples/tree/master/examples/pytorch/pytorch-cnn-fashion

Explore the results in this W&B Dashboard.

1. Create a project

First, create a baseline. Download the PyTorch MNIST dataset example model from W&B examples GitHub repository. Next, train the model. The training script is within the examples/pytorch/pytorch-cnn-fashion directory.

  1. Clone this repo git clone https://github.com/wandb/examples.git
  2. Open this example cd examples/pytorch/pytorch-cnn-fashion
  3. Run training manually: python train.py

Optionally, explore how the example appears in the W&B App UI dashboard.

View an example project page →

2. Create a sweep

From your project page, open the Sweep tab in the sidebar and select Create Sweep.

The auto-generated configuration guesses values to sweep over based on the runs you have completed. Edit the configuration to specify what ranges of hyperparameters you want to try. When you launch the sweep, it starts a new process on the hosted W&B sweep server. This centralized service coordinates the agents, the machines that run the training jobs.

3. Launch agents

Next, launch an agent locally. You can launch up to 20 agents on different machines in parallel if you want to distribute the work and finish the sweep job more quickly. The agent will print out the set of parameters it’s trying next.

Now you’re running a sweep. The following image demonstrates what the dashboard looks like as the example sweep job is running. View an example project page →

Seed a new sweep with existing runs

Launch a new sweep using existing runs that you’ve previously logged.

  1. Open your project table.
  2. Select the runs you want to use with checkboxes on the left side of the table.
  3. Click the dropdown to create a new sweep.

Your sweep will now be set up on our server. All you need to do is launch one or more agents to start running runs.

3 - Registry

W&B Registry is a curated central repository of artifact versions within your organization. Users who have permission within your organization can download, share, and collaboratively manage the lifecycle of all artifacts, regardless of the team that user belongs to.

You can use the Registry to track artifact versions, audit the history of an artifact’s usage and changes, ensure governance and compliance of your artifacts, and automate downstream processes such as model CI/CD.

In summary, use W&B Registry to:

The preceding image shows the Registry App with “Model” and “Dataset” core registries along with custom registries.

Learn the basics

Each organization initially contains two registries that you can use to organize your model and dataset artifacts called Models and Datasets, respectively. You can create additional registries to organize other artifact types based on your organization’s needs.

Each registry consists of one or more collections. Each collection represents a distinct task or use case.

To add an artifact to a registry, you first log a specific artifact version to W&B. Each time you log an artifact, W&B automatically assigns a version to that artifact. Artifact versions use 0 indexing, so the first version is v0, the second version is v1, and so on.

Once you log an artifact to W&B, you can then link that specific artifact version to a collection in the registry.

As an example, the following code example shows how to log and link a fake model artifact called “my_model.txt” to a collection named “first-collection” in the core Model registry. More specifically, the code accomplishes the following:

  1. Initialize a W&B run.
  2. Log the artifact to W&B.
  3. Specify the name of the collection and registry you want to link your artifact version to.
  4. Link the artifact to the collection.

Copy and paste the following code snippet into a Python script and run it. Ensure that you have W&B Python SDK version 0.18.6 or greater.

import wandb
import random

# Initialize a W&B run to track the artifact
run = wandb.init(project="registry_quickstart") 

# Create a simulated model file so that you can log it
with open("my_model.txt", "w") as f:
   f.write("Model: " + str(random.random()))

# Log the artifact to W&B
logged_artifact = run.log_artifact(
    artifact_or_path="./my_model.txt", 
    name="gemma-finetuned", 
    type="model" # Specifies artifact type
)

# Specify the name of the collection and registry
# you want to publish the artifact to
COLLECTION_NAME = "first-collection"
REGISTRY_NAME = "model"

# Link the artifact to the registry
run.link_artifact(
    artifact=logged_artifact, 
    target_path=f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}"
)

W&B automatically creates a collection for you if the collection you specify in the returned run object’s link_artifact(target_path = "") method does not exist within the registry you specify.

Navigate to the Registry App to view artifact versions that you and other members of your organization publish. To do so, first navigate to W&B. Select Registry in the left sidebar below Applications. Select the “Model” registry. Within the registry, you should see the “first-collection” collection with your linked artifact version.

Once you link an artifact version to a collection within a registry, members of your organization can view, download, and manage your artifact versions, create downstream automations, and more if they have the proper permissions.

Enable W&B Registry

Based on your deployment type, satisfy the following conditions to enable W&B Registry:

| Deployment type | How to enable |
|---|---|
| Multi-tenant Cloud | No action required. W&B Registry is available on the W&B App. |
| Dedicated Cloud | Contact your account team. The Solutions Architect (SA) Team enables W&B Registry within your instance's operator console. Ensure your instance is on server release version 0.59.2 or newer. |
| Self-Managed | Enable the environment variable called ENABLE_REGISTRY_UI. To learn more about enabling environment variables in server, visit these docs. In self-managed instances, your infrastructure administrator should enable this environment variable and set it to true. Ensure your instance is on server release version 0.59.2 or newer. |

Resources to get started

Depending on your use case, explore the following resources to get started with the W&B Registry:

  • Check out the tutorial video:
  • Take the W&B Model CI/CD course and learn how to:
    • Use W&B Registry to manage and version your artifacts, track lineage, and promote models through different lifecycle stages.
    • Automate your model management workflows using webhooks.
    • Integrate the registry with external ML systems and tools for model evaluation, monitoring, and deployment.

Migrate from the legacy Model Registry to W&B Registry

The legacy Model Registry is scheduled for deprecation with the exact date not yet decided. Before deprecating the legacy Model Registry, W&B will migrate the contents of the legacy Model Registry to the W&B Registry.

See Migrating from legacy Model Registry for more information about the migration process from the legacy Model Registry to W&B Registry.

Until the migration occurs, W&B supports both the legacy Model Registry and the new Registry.

Reach out to support@wandb.com with any questions or to speak to the W&B Product Team about any concerns about the migration.

3.1 - Registry types

W&B supports two types of registries: Core registries and Custom registries.

Core registry

A core registry is a template for specific use cases: Models and Datasets.

By default, the Models registry is configured to accept "model" artifact types and the Dataset registry is configured to accept "dataset" artifact types. An admin can add additional accepted artifact types.

The preceding image shows the Models and the Dataset core registry along with a custom registry called Fine_Tuned_Models in the W&B Registry App UI.

A core registry has organization visibility. A registry admin can not change the visibility of a core registry.

Custom registry

Custom registries are not restricted to "model" artifact types or "dataset" artifact types.

You can create a custom registry for each step in your machine learning pipeline, from initial data collection to final model deployment.

For example, you might create a registry called “Benchmark_Datasets” for organizing curated datasets to evaluate the performance of trained models. Within this registry, you might have a collection called “User_Query_Insurance_Answer_Test_Data” that contains a set of user questions and corresponding expert-validated answers that the model has never seen during training.

A custom registry can have either organization or restricted visibility. A registry admin can change the visibility of a custom registry from organization to restricted. However, the registry admin can not change a custom registry’s visibility from restricted to organizational visibility.

For information on how to create a custom registry, see Create a custom registry.

Summary

The following table summarizes the differences between core and custom registries:

| | Core | Custom |
|---|---|---|
| Visibility | Organizational visibility only. Visibility can not be altered. | Either organization or restricted. Visibility can be altered from organization to restricted visibility. |
| Metadata | Preconfigured and not editable by users. | Users can edit. |
| Artifact types | Preconfigured and accepted artifact types cannot be removed. Users can add additional accepted artifact types. | Admin can define accepted types. |
| Customization | Can add additional types to the existing list. | Edit registry name, description, visibility, and accepted artifact types. |

3.2 - Create a custom registry

Create a custom registry for each step of your ML workflow.

Custom registries are particularly useful for organizing project-specific requirements that differ from the default, core registry.

The following procedure describes how to interactively create a registry:

  1. Navigate to the Registry App in the W&B App UI.
  2. Within Custom registry, click on the Create registry button.
  3. Provide a name for your registry in the Name field.
  4. Optionally provide a description about the registry.
  5. Select who can view the registry from the Registry visibility dropdown. See Registry visibility types for more information on registry visibility options.
  6. Select either All types or Specify types from the Accepted artifacts type dropdown.
  7. (If you select Specify types) Add one or more artifact types that your registry accepts.
  8. Click on the Create registry button.

For example, the preceding image shows a custom registry called "Fine_Tuned_Models" that a user is about to create. The registry is set to Restricted, which means that only members who are manually added to the "Fine_Tuned_Models" registry have access to it.

3.3 - Configure registry access

Registry admins can limit who can access a registry by navigating to a registry's settings and assigning a user's role to Admin, Member, or Viewer. Users can have different roles in different registries. For example, a user can have a viewer role in "Registry A" and a member role in "Registry B".

Registry roles permissions

A user within an organization can have different roles, and therefore permissions, for each registry in their organization.

The following table lists the different roles a user can have and their permissions:

| Permission | Permission Group | Viewer | Member | Admin | Owner |
|---|---|---|---|---|---|
| View a collection's details | Read | X | X | X | X |
| View a linked artifact's details | Read | X | X | X | X |
| Usage: Consume an artifact in a registry with use_artifact | Read | X | X | X | X |
| Download a linked artifact | Read | X | X | X | X |
| Download files from an artifact's file viewer | Read | X | X | X | X |
| Search a registry | Read | X | X | X | X |
| View a registry's settings and user list | Read | X | X | X | X |
| Create a new automation for a collection | Create | | X | X | X |
| Turn on Slack notifications for new version being added | Create | | X | X | X |
| Create a new collection | Create | | X | X | X |
| Create a new custom registry | Create | | X | X | X |
| Edit collection card (description) | Update | | X | X | X |
| Edit linked artifact description | Update | | X | X | X |
| Add or delete a collection's tag | Update | | X | X | X |
| Add or delete an alias from a linked artifact | Update | | X | X | X |
| Link a new artifact | Update | | X | X | X |
| Edit allowed types list for a registry | Update | | X | X | X |
| Edit custom registry name | Update | | X | X | X |
| Delete a collection | Delete | | X | X | X |
| Delete an automation | Delete | | X | X | X |
| Unlink an artifact from a registry | Delete | | X | X | X |
| Edit accepted artifact types for a registry | Admin | | | X | X |
| Change registry visibility (Organization or Restricted) | Admin | | | X | X |
| Add users to a registry | Admin | | | X | X |
| Assign or change a user's role in a registry | Admin | | | X | X |

Configure user roles in a registry

  1. Navigate to the Registry App in the W&B App UI.
  2. Select the registry you want to configure.
  3. Click on the gear icon on the upper right hand corner.
  4. Scroll to the Registry members and roles section.
  5. Within the Member field, search for the user you want to edit permissions for.
  6. Click on the user’s role within the Registry role column.
  7. From the dropdown, select the role you want to assign to the user.

Remove a user from a registry

  1. Navigate to the Registry App in the W&B App UI.
  2. Select a core or custom registry.
  3. Click on the gear icon on the upper right hand corner.
  4. Scroll to the Registry members and roles section and type in the username of the member you want to remove.
  5. Click the Delete button.

Registry visibility types

There are two registry visibility types: restricted or organization visibility. The following table describes who has access to the registry by default:

| Visibility | Description | Default role | Example |
|---|---|---|---|
| Organization | Everyone in the org can access the registry. | By default, organization administrators are an admin for the registry. All other users are a viewer in the registry by default. | Core registry |
| Restricted | Only invited org members can access the registry. | The user who created the restricted registry is the only user in the registry by default, and is the organization's owner. | Custom registry or core registry |

Restrict visibility to a registry

Restrict who can view and access a custom registry. You can restrict visibility to a registry when you create a custom registry or after you create a custom registry. A custom registry can have either restricted or organization visibility. For more information on registry visibilities, see Registry visibility types.

The following steps describe how to restrict the visibility of a custom registry that already exists:

  1. Navigate to the Registry App in the W&B App UI.
  2. Select a registry.
  3. Click on the gear icon on the upper right hand corner.
  4. From the Registry visibility dropdown, select the desired registry visibility.

Continue if you select Restricted visibility:

  1. Add members of your organization that you want to have access to this registry. Scroll to the Registry members and roles section and click on the Add member button.
  2. Within the Member field, add the email or username of the member you want to add.
  3. Click Add new member.

3.4 - Create a collection

A collection is a set of linked artifact versions within a registry. Each collection represents a distinct task or use case.

For example, within the core Dataset registry you might have multiple collections. Each collection contains a different dataset such as MNIST, CIFAR-10, or ImageNet.

As another example, you might have a registry called “chatbot” that contains a collection for model artifacts, another collection for dataset artifacts, and another collection for fine-tuned model artifacts.

How you organize a registry and its collections is up to you.

Collection types

Each collection accepts one, and only one, type of artifact. The type you specify restricts what sort of artifacts you, and other members of your organization, can link to that collection.

For example, suppose you create a collection that accepts “dataset” artifact types. This means that you can only link future artifact versions that have the type “dataset” to this collection. Similarly, you can only link artifacts of type “model” to a collection that accepts only model artifact types.

When you create a collection, you can select from a list of predefined artifact types. The artifact types available to you depend on the registry that the collection belongs to.

Before you link an artifact to a collection or create a new collection, investigate the types of artifacts that collection accepts.

Check the types of artifact that a collection accepts

Before you link to a collection, inspect the artifact type that the collection accepts. You can inspect the artifact types that a collection accepts programmatically with the W&B Python SDK or interactively with the W&B App.

You can find the accepted artifact types on the registry card on the homepage or within a registry’s settings page.

For both methods, first navigate to your W&B Registry App.

Within the homepage of the Registry App, you can view the accepted artifact types by scrolling to the registry card of that registry. The gray horizontal ovals within the registry card list the artifact types that the registry accepts.

For example, the preceding image shows multiple registry cards on the Registry App homepage. Within the Model registry card, you can see two artifact types: model and model-new.

To view accepted artifact types within a registry’s settings page:

  1. Click on the registry card you want to view the settings for.
  2. Click on the gear icon in the upper right corner.
  3. Scroll to the Accepted artifact types field.

Programmatically view the artifact types that a registry accepts with the W&B Python SDK:

import wandb

registry_name = "<registry_name>"
artifact_types = wandb.Api().project(name=f"wandb-registry-{registry_name}").artifact_types()
for artifact_type in artifact_types:
    print(artifact_type.name)

Once you know what type of artifact a collection accepts, you can create a collection.

Create a collection

Interactively or programmatically create a collection within a registry. You can not change the type of artifact that a collection accepts after you create it.

Programmatically create a collection

Use the link_artifact() method of the run object returned by wandb.init() to link an artifact to a collection. Specify both the collection and the registry to the target_path field as a path that takes the form of:

f"wandb-registry-{registry_name}/{collection_name}"

Where registry_name is the name of the registry and collection_name is the name of the collection. Be sure to append the prefix wandb-registry- to the registry name.

The following code snippet shows how to programmatically create a collection. Replace the values enclosed in <> with your own:

import wandb

# Initialize a run
run = wandb.init(entity = "<team_entity>", project = "<project>")

# Create an artifact object
artifact = wandb.Artifact(
  name = "<artifact_name>",
  type = "<artifact_type>"
  )

registry_name = "<registry_name>"
collection_name = "<collection_name>"
target_path = f"wandb-registry-{registry_name}/{collection_name}"

# Link the artifact to a collection
run.link_artifact(artifact = artifact, target_path = target_path)

run.finish()

Interactively create a collection

The following steps describe how to create a collection within a registry using the W&B Registry App UI:

  1. Navigate to the Registry App in the W&B App UI.
  2. Select a registry.
  3. Click on the Create collection button in the upper right hand corner.
  4. Provide a name for your collection in the Name field.
  5. Select a type from the Type dropdown. Or, if the registry enables custom artifact types, provide one or more artifact types that this collection accepts.
  6. Optionally provide a description of your collection in the Description field.
  7. Optionally add one or more tags in the Tags field.
  8. Click Link version.
  9. From the Project dropdown, select the project where your artifact is stored.
  10. From the Artifact collection dropdown, select your artifact.
  11. From the Version dropdown, select the artifact version you want to link to your collection.
  12. Click on the Create collection button.

3.5 - Link an artifact version to a registry

Link artifact versions to a collection to make them available to other members in your organization.

When you link an artifact to a registry, this “publishes” that artifact to that registry. Any user that has access to that registry can access the linked artifact versions in the collection.

In other words, linking an artifact to a registry collection brings that artifact version from a private, project-level scope to a shared, organization-level scope.

Link an artifact version to a collection interactively or programmatically.

Based on your use case, follow the instructions below to link an artifact version.

Programmatically link an artifact version to a collection with the run object's link_artifact() method.

Use the target_path parameter to specify the collection and registry you want to link the artifact version to. The target path consists of the prefix "wandb-registry", the name of the registry, and the name of the collection, separated by forward slashes:

wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}

Copy and paste the code snippet below to link an artifact version to a collection within an existing registry. Replace values enclosed in <> with your own:

import wandb

# Initialize a run
run = wandb.init(
  entity = "<team_entity>",
  project = "<project_name>"
)

# Create an artifact object
# The type parameter specifies both the type of the 
# artifact object and the collection type
artifact = wandb.Artifact(name = "<name>", type = "<type>")

# Add the file to the artifact object. 
# Specify the path to the file on your local machine.
artifact.add_file(local_path = "<local_path_to_artifact>")

# Specify the collection and registry to link the artifact to
REGISTRY_NAME = "<registry_name>"  
COLLECTION_NAME = "<collection_name>"
target_path=f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}"

# Link the artifact to the collection
run.link_artifact(artifact = artifact, target_path = target_path)

To link an artifact version interactively from the Registry App:

  1. Navigate to the Registry App.
  2. Hover your mouse next to the name of the collection you want to link an artifact version to.
  3. Select the meatball menu icon (three horizontal dots) next to View details.
  4. From the dropdown, select Link new version.
  5. From the sidebar that appears, select the name of a team from the Team dropdown.
  6. From the Project dropdown, select the name of the project that contains your artifact.
  7. From the Artifact dropdown, select the name of the artifact.
  8. From the Version dropdown, select the artifact version you want to link to the collection.

To link an artifact version from your project's artifact browser:

  1. Navigate to your project's artifact browser on the W&B App at: https://wandb.ai/<entity>/<project>/artifacts
  2. Select the Artifacts icon on the left sidebar.
  3. Click on the artifact version you want to link to your registry.
  4. Within the Version overview section, click the Link to registry button.
  5. From the modal that appears on the right of the screen, select an artifact from the Select a register model menu dropdown.
  6. Click Next step.
  7. (Optional) Select an alias from the Aliases dropdown.
  8. Click Link to registry.

View a linked artifact’s metadata, version data, usage, lineage information and more in the Registry App.

View linked artifacts in a registry

View information about linked artifacts such as metadata, lineage, and usage information in the Registry App.

  1. Navigate to the Registry App.
  2. Select the name of the registry that you linked the artifact to.
  3. Select the name of the collection.
  4. From the list of artifact versions, select the version you want to access. Version numbers are incrementally assigned to each linked artifact version starting with v0.

Once you select an artifact version, you can view that version’s metadata, lineage, and usage information.

Make note of the Full Name field within the Version tab. The full name of a linked artifact consists of the registry, collection name, and the alias or index of the artifact version.

wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{INTEGER}

You need the full name of a linked artifact to access the artifact version programmatically.
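
For example, here is a minimal sketch that consumes a linked version by its full name. The registry, collection, and entity names are placeholders; see Download an artifact from a registry for details:

import wandb

run = wandb.init(entity="<team_entity>", project="<project_name>")
fetched_artifact = run.use_artifact("wandb-registry-model/first-collection:v0")
download_path = fetched_artifact.download()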

Troubleshooting

Below are some common things to double check if you are not able to link an artifact.

Logging artifacts from a personal account

Artifacts logged to W&B with a personal entity can not be linked to the registry. Make sure that you log artifacts using a team entity within your organization. Only artifacts logged within an organization’s team can be linked to the organization’s registry.

Find your team entity

W&B uses the name of your team as the team’s entity. For example, if your team is called team-awesome, your team entity is team-awesome.

You can confirm the name of your team by:

  1. Navigate to your team’s W&B profile page.
  2. Copy the site's URL. It has the form https://wandb.ai/<team>, where <team> is both the name of your team and the team's entity.

Log from a team entity

  1. Specify the team as the entity when you initialize a run with wandb.init(). If you do not specify the entity when you initialize a run, the run uses your default entity which may or may not be your team entity.
import wandb   

run = wandb.init(
  entity='<team_entity>', 
  project='<project_name>'
  )
  2. Log the artifact to the run, either with run.log_artifact or by creating an Artifact object, adding files to it, and logging it:

    artifact = wandb.Artifact(name="<artifact_name>", type="<type>")
    artifact.add_file(local_path="<local_path_to_artifact>")
    run.log_artifact(artifact)

    For more information on how to log artifacts, see Construct artifacts.

  3. If an artifact is logged to your personal entity, re-log it to an entity within your organization.

Confirm the path of a registry in the W&B App UI

There are two ways to confirm the path of a registry with the UI: create an empty collection and view the collection details, or copy and paste the autogenerated code on the collection's home page.

Copy and paste autogenerated code

  1. Navigate to the Registry app at https://wandb.ai/registry/.
  2. Click the registry you want to link an artifact to.
  3. At the top of the page, you will see an autogenerated code block.
  4. Copy and paste this into your code, replacing the last part of the path with the name of your collection.

Create an empty collection

  1. Navigate to the Registry app at https://wandb.ai/registry/.
  2. Click the registry you want to link an artifact to.
  3. Click on the empty collection. If an empty collection does not exist, create a new collection.
  4. Within the code snippet that appears, identify the target_path field within .link_artifact().
  5. (Optional) Delete the collection.

For example, after completing the steps outlined, you find the code block with the target_path parameter:

target_path = "smle-registries-bug-bash/wandb-registry-Golden Datasets/raw_images"

Breaking this down into its components shows what you need to construct the path and link your artifact programmatically:

ORG_ENTITY_NAME = "smle-registries-bug-bash"
REGISTRY_NAME = "Golden Datasets"
COLLECTION_NAME = "raw_images"
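
Putting those components back together, a minimal sketch (reusing the illustrative names above; the artifact itself is a placeholder) of linking an artifact with that path might look like this:

import wandb

ORG_ENTITY_NAME = "smle-registries-bug-bash"
REGISTRY_NAME = "Golden Datasets"
COLLECTION_NAME = "raw_images"

run = wandb.init(entity="<team_entity>", project="<project_name>")
logged_artifact = run.log_artifact(artifact_or_path="<path_to_file>", name="<artifact_name>", type="<type>")

run.link_artifact(
    artifact=logged_artifact,
    target_path=f"{ORG_ENTITY_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}",
)
run.finish()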

3.6 - Download an artifact from a registry

Use the W&B Python SDK to download an artifact linked to a registry. To download and use an artifact, you need to know the name of the registry, the name of the collection, and the alias or index of the artifact version you want to download.

Once you know the properties of the artifact, you can construct the path to the linked artifact and download the artifact. Alternatively, you can copy and paste a pre-generated code snippet from the W&B App UI to download an artifact linked to a registry.

Construct path to linked artifact

To download an artifact linked to a registry, you must know the path of that linked artifact. The path consists of the registry name, collection name, and the alias or index of the artifact version you want to access.

Once you have the registry, collection, and alias or index of the artifact version, you can construct the path to the linked artifact with the following string templates:

# Artifact name with version index specified
f"wandb-registry-{REGISTRY}/{COLLECTION}:v{INDEX}"

# Artifact name with alias specified
f"wandb-registry-{REGISTRY}/{COLLECTION}:{ALIAS}"

Replace the values within the curly braces {} with the name of the registry, collection, and the alias or index of the artifact version you want to access.

Use the run object's use_artifact method to access the artifact and download its contents once you have the path of the linked artifact. The following code snippet shows how to use and download an artifact linked to the W&B Registry. Replace values within <> with your own:

import wandb

REGISTRY = '<registry_name>'
COLLECTION = '<collection_name>'
ALIAS = '<artifact_alias>'

run = wandb.init(
   entity = '<team_name>',
   project = '<project_name>'
   )  

artifact_name = f"wandb-registry-{REGISTRY}/{COLLECTION}:{ALIAS}"
# artifact_name = '<artifact_name>' # Copy and paste Full name specified on the Registry App
fetched_artifact = run.use_artifact(artifact_or_name = artifact_name)  
download_path = fetched_artifact.download()  

Initializing a run with wandb.init() and fetching the artifact with run.use_artifact() marks the artifact you download as an input to that run. Marking an artifact as the input to a run enables W&B to track the lineage of that artifact.

If you do not want to create a run, you can use the wandb.Api() object to access the artifact:

import wandb

REGISTRY = "<registry_name>"
COLLECTION = "<collection_name>"
VERSION = "<version>"

api = wandb.Api()
artifact_name = f"wandb-registry-{REGISTRY}/{COLLECTION}:{VERSION}"
artifact = api.artifact(name = artifact_name)
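
Continuing the snippet above, you can then download the artifact's contents; note that fetching an artifact this way does not mark it as an input to any run:

download_path = artifact.download()
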
Example: Use and download an artifact linked to the W&B Registry

The following code example shows how to download an artifact linked to a collection called phi3-finetuned in the Fine-tuned Models registry. The alias of the artifact version is set to production.

import wandb

TEAM_ENTITY = "product-team-applications"
PROJECT_NAME = "user-stories"

REGISTRY = "Fine-tuned Models"
COLLECTION = "phi3-finetuned"
ALIAS = 'production'

# Initialize a run inside the specified team and project
run = wandb.init(entity=TEAM_ENTITY, project = PROJECT_NAME)

artifact_name = f"wandb-registry-{REGISTRY}/{COLLECTION}:{ALIAS}"

# Access an artifact and mark it as input to your run for lineage tracking
fetched_artifact = run.use_artifact(artifact_or_name = artifact_name)

# Download artifact. Returns path to downloaded contents
downloaded_path = fetched_artifact.download()  

See use_artifact and Artifact.download() in the API Reference guide for more information on possible parameters and return type.

Copy and paste pre-generated code snippet

W&B creates a code snippet that you can copy and paste into your Python script, notebook, or terminal to download an artifact linked to a registry.

  1. Navigate to the Registry App.
  2. Select the name of the registry that contains your artifact.
  3. Select the name of the collection.
  4. From the list of artifact versions, select the version you want to access.
  5. Select the Usage tab.
  6. Copy the code snippet shown in the Usage API section.
  7. Paste the code snippet into your Python script, notebook, or terminal.

3.7 - Organize versions with tags

Use tags to organize collections or artifact versions within collections. You can add, remove, or edit tags with the Python SDK or W&B App UI.

Create and add tags to organize your collections or artifact versions within your registry. Add, modify, view, or remove tags on a collection or artifact version with the W&B App UI or the W&B Python SDK.

Add a tag to a collection

Use the W&B App UI or Python SDK to add a tag to a collection:

Use the W&B App UI to add a tag to a collection:

  1. Navigate to the W&B Registry at https://wandb.ai/registry
  2. Click on a registry card
  3. Click View details next to the name of a collection
  4. Within the collection card, click on the plus icon (+) next to the Tags field and type in the name of the tag
  5. Press Enter on your keyboard

Use the W&B Python SDK to add a tag to a collection:

import wandb

COLLECTION_TYPE = "<collection_type>"
ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"

full_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}"

collection = wandb.Api().artifact_collection(
  type_name = COLLECTION_TYPE, 
  name = full_name
  )

collection.tags = ["your-tag"]
collection.save()

Update tags that belong to a collection

Update a tag programmatically by reassigning or by mutating the tags attribute. W&B recommends, and it is good Python practice, that you reassign the tags attribute instead of mutating it in place.

For example, the following code snippet shows common ways to update a list with reassignment. For brevity, we continue the code example from the Add a tag to a collection section:

collection.tags = [*collection.tags, "new-tag", "other-tag"]
collection.tags = collection.tags + ["new-tag", "other-tag"]

collection.tags = set(collection.tags) - set(tags_to_delete)
collection.tags = []  # deletes all tags

The following code snippet shows how you can use in-place mutation to update tags that belong to a collection:

collection.tags += ["new-tag", "other-tag"]
collection.tags.append("new-tag")

collection.tags.extend(["new-tag", "other-tag"])
collection.tags[:] = ["new-tag", "other-tag"]
collection.tags.remove("existing-tag")
collection.tags.pop()
collection.tags.clear()

View tags that belong to a collection

Use the W&B App UI to view tags added to a collection:

  1. Navigate to the W&B Registry at https://wandb.ai/registry
  2. Click on a registry card
  3. Click View details next to the name of a collection

If a collection has one or more tags, you can view those tags within the collection card next to the Tags field.

Tags added to a collection also appear next to the name of that collection.

For example, in the preceding image, a tag called "tag1" was added to the "zoo-dataset-tensors" collection.
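
You can also read a collection's tags programmatically. The following is a minimal sketch using the same Public API call shown earlier; replace the placeholder values with your own:

import wandb

collection = wandb.Api().artifact_collection(
  type_name = "<collection_type>",
  name = "<org_name>/wandb-registry-<registry_name>/<collection_name>"
  )
print(collection.tags)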

Remove a tag from a collection

Use the W&B App UI to remove a tag from a collection:

  1. Navigate to the W&B Registry at https://wandb.ai/registry
  2. Click on a registry card
  3. Click View details next to the name of a collection
  4. Within the collection card, hover your mouse over the name of the tag you want to remove
  5. Click on the cancel button (X icon)

Add a tag to an artifact version

Add a tag to an artifact version linked to a collection with the W&B App UI or with the Python SDK.

  1. Navigate to the W&B Registry at https://wandb.ai/registry
  2. Click on a registry card
  3. Click View details next to the name of the collection you want to add a tag to
  4. Scroll down to Versions
  5. Click View next to an artifact version
  6. Within the Version tab, click on the plus icon (+) next to the Tags field and type in the name of the tag
  7. Press Enter on your keyboard

Use the W&B Python SDK to add a tag to an artifact version. Fetch the artifact version you want to add or update a tag for. Once you have the artifact version, you can access the artifact object's tags attribute to add or modify tags. Pass one or more tags as a list to the artifact's tags attribute.

Like other artifacts, you can fetch an artifact from W&B without creating a run or you can create a run and fetch the artifact within that run. In either case, be sure to call the artifact object's save method to update the artifact on the W&B servers.

Copy and paste the appropriate code cell below to add or modify an artifact version's tags. Replace the values in <> with your own.

The following code snippet shows how to fetch an artifact and add a tag without creating a new run:

import wandb

ARTIFACT_TYPE = "<TYPE>"
ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
VERSION = "<artifact_version>"

artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"

artifact = wandb.Api().artifact(name = artifact_name, type = ARTIFACT_TYPE)
artifact.tags = ["tag2"] # Provide one or more tags in a list
artifact.save()

The following code snippet shows how to fetch an artifact and add a tag by creating a new run:

import wandb

ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
VERSION = "<artifact_version>"

run = wandb.init(entity = "<entity>", project="<project>")

artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"

artifact = run.use_artifact(artifact_or_name = artifact_name)
artifact.tags = ["tag2"] # Provide one or more tags in a list
artifact.save()

Update tags that belong to an artifact version

Update a tag programmatically by reassigning or by mutating the tags attribute. W&B recommends, and it is good Python practice, that you reassign the tags attribute instead of mutating it in place.

For example, the following code snippet shows common ways to update a list with reassignment. For brevity, we continue the code example from the Add a tag to an artifact version section:

artifact.tags = [*artifact.tags, "new-tag", "other-tag"]
artifact.tags = artifact.tags + ["new-tag", "other-tag"]

artifact.tags = set(artifact.tags) - set(tags_to_delete)
artifact.tags = []  # deletes all tags

The following code snippet shows how you can use in-place mutation to update tags that belong to an artifact version:

artifact.tags += ["new-tag", "other-tag"]
artifact.tags.append("new-tag")

artifact.tags.extend(["new-tag", "other-tag"])
artifact.tags[:] = ["new-tag", "other-tag"]
artifact.tags.remove("existing-tag")
artifact.tags.pop()
artifact.tags.clear()

View tags that belong to an artifact version

View tags that belong to an artifact version that is linked to a registry with the W&B App UI or with the Python SDK.

  1. Navigate to the W&B Registry at https://wandb.ai/registry
  2. Click on a registry card
  3. Click View details next to the name of the collection that contains the artifact version
  4. Scroll down to the Versions section

If an artifact version has one or more tags, you can view those tags within the Tags column.

Use the W&B Python SDK to view an artifact version's tags. Fetch the artifact version whose tags you want to view. Once you have the artifact version, you can view its tags by inspecting the artifact object's tags attribute.

Like other artifacts, you can fetch an artifact from W&B without creating a run or you can create a run and fetch the artifact within that run.

Copy and paste the appropriate code cell below to view an artifact version's tags. Replace the values in <> with your own.

The following code snippet shows how to fetch and view an artifact version's tags without creating a new run:

import wandb

ARTIFACT_TYPE = "<TYPE>"
ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
VERSION = "<artifact_version>"

artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"

artifact = wandb.Api().artifact(name = artifact_name, type = ARTIFACT_TYPE)
print(artifact.tags)

The following code snippet shows how to fetch and view an artifact version's tags by creating a new run:

import wandb

ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
VERSION = "<artifact_version>"

run = wandb.init(entity = "<entity>", project="<project>")

artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"

artifact = run.use_artifact(artifact_or_name = artifact_name)
print(artifact.tags)

Remove a tag from an artifact version

  1. Navigate to the W&B Registry at https://wandb.ai/registry
  2. Click on a registry card
  3. Click View details next to the name of the collection that contains the artifact version
  4. Scroll down to Versions
  5. Click View next to an artifact version
  6. Within the Version tab, hover your mouse over the name of the tag
  7. Click on the cancel button (X icon)

Search existing tags

Use the W&B App UI to search existing tags in collections and artifact versions:

  1. Navigate to the W&B Registry at https://wandb.ai/registry
  2. Click on a registry card
  3. Within the search bar, type in the name of a tag

Find artifact versions with a specific tag

Use the W&B Python SDK to find artifact versions that have a set of tags:

import wandb

api = wandb.Api()
tagged_artifact_versions = api.artifacts(
    type_name = "<artifact_type>",
    name = "<artifact_name>",
    tags = ["<tag_1>", "<tag_2>"]
)

for artifact_version in tagged_artifact_versions:
    print(artifact_version.tags)

3.8 - Migrate from legacy Model Registry

W&B will transition assets from the legacy W&B Model Registry to the new W&B Registry. This migration will be fully managed and triggered by W&B, requiring no intervention from users. The process is designed to be as seamless as possible, with minimal disruption to existing workflows.

The transition will take place once the new W&B Registry includes all the functionalities currently available in the Model Registry. W&B will attempt to preserve current workflows, codebases, and references.

This guide is a living document and will be updated regularly as more information becomes available. For any questions or support, contact support@wandb.com.

How W&B Registry differs from the legacy Model Registry

W&B Registry introduces a range of new features and enhancements designed to provide a more robust and flexible environment for managing models, datasets, and other artifacts.

Organizational visibility

Artifacts linked to the legacy Model Registry have team level visibility. This means that only members of your team can view your artifacts in the legacy W&B Model Registry. W&B Registry has organization level visibility. This means that members across an organization, with correct permissions, can view artifacts linked to a registry.

Restrict visibility to a registry

Restrict who can view and access a custom registry. You can restrict visibility to a registry when you create a custom registry or after you create a custom registry. In a Restricted registry, only selected members can access the content, maintaining privacy and control. For more information about registry visibility, see Registry visibility types.

Create custom registries

Unlike the legacy Model Registry, W&B Registry is not limited to models or dataset registries. You can create custom registries tailored to specific workflows or project needs, capable of holding any arbitrary object type. This flexibility allows teams to organize and manage artifacts according to their unique requirements. For more information on how to create a custom registry, see Create a custom registry.

Custom access control

Each registry supports detailed access control, where members can be assigned specific roles such as Admin, Member, or Viewer. Admins can manage registry settings, including adding or removing members, setting roles, and configuring visibility. This ensures that teams have the necessary control over who can view, manage, and interact with the artifacts in their registries.

Terminology update

Registered models are now referred to as collections.

Summary of changes

| | Legacy W&B Model Registry | W&B Registry |
|---|---|---|
| Artifact visibility | Only members of team can view or access artifacts | Members in your organization, with correct permissions, can view or access artifacts linked to a registry |
| Custom access control | Not available | Available |
| Custom registry | Not available | Available |
| Terminology update | A set of pointers (links) to model versions are called registered models. | A set of pointers (links) to artifact versions are called collections. |
| wandb.init.link_model | Model Registry specific API | Currently only compatible with legacy model registry |

Preparing for the migration

W&B will migrate registered models (now called collections) and associated artifact versions from the legacy Model Registry to the W&B Registry. This process will be conducted automatically, with no action required from users.

Team visibility to organization visibility

After the migration, your model registry will have organization level visibility. You can restrict who has access to a registry by assigning roles. This helps ensure that only specific members have access to specific registries.

The migration will preserve existing permission boundaries of your current team-level registered models (soon to be called collections) in the legacy W&B Model Registry. Permissions currently defined in the legacy Model Registry will be preserved in the new Registry. This means that collections currently restricted to specific team members will remain protected during and after the migration.

Artifact path continuity

No action is currently required.

During the migration

W&B will initiate the migration process. The migration will occur during a time window that minimizes disruption to W&B services. The legacy Model Registry will transition to a read-only state once the migration begins and will remain accessible for reference.

After the migration

Post-migration, collections, artifact versions, and associated attributes will be fully accessible within the new W&B Registry. The focus is on ensuring that current workflows remain intact, with ongoing support available to help navigate any changes.

Using the new registry

Users are encouraged to explore the new features and capabilities available in the W&B Registry. The Registry will not only support the functionalities currently relied upon but also introduces enhancements such as custom registries, improved visibility, and flexible access controls.

Support is available if you are interested in trying the W&B Registry early, or for new users that prefer to start with Registry and not the legacy W&B Model Registry. Contact support@wandb.com or your Sales MLE to enable this functionality. Note that any early migration will be into a BETA version. The BETA version of W&B Registry might not have all the functionality or features of the legacy Model Registry.

For more details and to learn about the full range of features in the W&B Registry, visit the W&B Registry Guide.

FAQs

Why is W&B migrating assets from Model Registry to W&B Registry?

W&B is evolving its platform to offer more advanced features and capabilities with the new Registry. This migration is a step towards providing a more integrated and powerful toolset for managing models, datasets, and other artifacts.

What needs to be done before the migration?

No action is required from users before the migration. W&B will handle the transition, ensuring that workflows and references are preserved.

Will access to model artifacts be lost?

No, access to model artifacts will be retained after the migration. The legacy Model Registry will remain in a read-only state, and all relevant data will be migrated to the new Registry.

Will metadata and lineage information be preserved?

Yes, important metadata related to artifact creation, lineage, and other attributes will be preserved during the migration. Users will continue to have access to all relevant metadata after the migration, ensuring that the integrity and traceability of their artifacts remain intact.

Who do I contact if I need help?

Support is available for any questions or concerns. Reach out to support@wandb.com for assistance.

3.9 - Model registry

Model registry to manage the model lifecycle from training to production

The W&B Model Registry houses a team’s trained models where ML Practitioners can publish candidates for production to be consumed by downstream teams and stakeholders. It is used to house staged/candidate models and manage workflows associated with staging.

With W&B Model Registry, you can:

How it works

Track and manage your staged models with a few simple steps.

  1. Log a model version: In your training script, add a few lines of code to save the model files as an artifact to W&B.
  2. Compare performance: Check live charts to compare the metrics and sample predictions from model training and validation. Identify which model version performed the best.
  3. Link to registry: Bookmark the best model version by linking it to a registered model, either programmatically in Python or interactively in the W&B UI.

The following code snippet demonstrates how to log and link a model to the Model Registry:

import wandb
import random

# Start a new W&B run
run = wandb.init(project="models_quickstart")

# Simulate logging model metrics
run.log({"acc": random.random()})

# Create a simulated model file
with open("my_model.h5", "w") as f:
    f.write("Model: " + str(random.random()))

# Log and link the model to the Model Registry
run.link_model(path="./my_model.h5", registered_model_name="MNIST")

run.finish()

  4. Connect model transitions to CI/CD workflows: transition candidate models through workflow stages and automate downstream actions with webhooks or jobs.

How to get started

Depending on your use case, explore the following resources to get started with W&B Models:

3.9.1 - Tutorial: Use W&B for model management

Learn how to use W&B for Model Management

The following walkthrough shows you how to log a model to W&B. By the end of the walkthrough you will:

  • Create and train a model with the MNIST dataset and the Keras framework.
  • Log the model that you trained to a W&B project
  • Mark the dataset used as a dependency to the model you created
  • Link the model to the W&B Registry.
  • Evaluate the performance of the model you link to the registry
  • Mark a model version ready for production.

Setting up

Before you get started, import the Python dependencies required for this walkthrough:

import wandb
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from wandb.integration.keras import WandbMetricsLogger
from sklearn.model_selection import train_test_split

Provide your W&B entity to the entity variable:

entity = "<entity>"

Create a dataset artifact

First, create a dataset. The following code snippet creates a function that downloads the MNIST dataset:

def generate_raw_data(train_size=6000):
    eval_size = int(train_size / 6)
    (x_train, y_train), (x_eval, y_eval) = keras.datasets.mnist.load_data()

    x_train = x_train.astype("float32") / 255
    x_eval = x_eval.astype("float32") / 255
    x_train = np.expand_dims(x_train, -1)
    x_eval = np.expand_dims(x_eval, -1)

    print("Generated {} rows of training data.".format(train_size))
    print("Generated {} rows of eval data.".format(eval_size))

    return (x_train[:train_size], y_train[:train_size]), (
        x_eval[:eval_size],
        y_eval[:eval_size],
    )

# Create dataset
(x_train, y_train), (x_eval, y_eval) = generate_raw_data()

Next, upload the dataset to W&B. To do this, create an artifact object and add the dataset to that artifact.

project = "model-registry-dev"

model_use_case_id = "mnist"
job_type = "build_dataset"

# Initialize a W&B run
run = wandb.init(entity=entity, project=project, job_type=job_type)

# Create W&B Table for training data
train_table = wandb.Table(data=[], columns=[])
train_table.add_column("x_train", x_train)
train_table.add_column("y_train", y_train)
train_table.add_computed_columns(lambda ndx, row: {"img": wandb.Image(row["x_train"])})

# Create W&B Table for eval data
eval_table = wandb.Table(data=[], columns=[])
eval_table.add_column("x_eval", x_eval)
eval_table.add_column("y_eval", y_eval)
eval_table.add_computed_columns(lambda ndx, row: {"img": wandb.Image(row["x_eval"])})

# Create an artifact object
artifact_name = "{}_dataset".format(model_use_case_id)
artifact = wandb.Artifact(name=artifact_name, type="dataset")

# Add wandb.WBValue obj to the artifact.
artifact.add(train_table, "train_table")
artifact.add(eval_table, "eval_table")

# Persist any changes made to the artifact.
artifact.save()

# Tell W&B this run is finished.
run.finish()

Train a model

Train a model with the artifact dataset you created in the previous step.

Declare dataset artifact as an input to the run

Declare the dataset artifact you created in a previous step as the input to the W&B run. This is particularly useful in the context of logging models because declaring an artifact as an input to a run lets you track the dataset (and the version of the dataset) used to train a specific model. W&B uses the information collected to create a lineage map.

Use the use_artifact API to both declare the dataset artifact as the input of the run and to retrieve the artifact itself.

job_type = "train_model"
config = {
    "optimizer": "adam",
    "batch_size": 128,
    "epochs": 5,
    "validation_split": 0.1,
}

# Initialize a W&B run
run = wandb.init(project=project, job_type=job_type, config=config)

# Retrieve the dataset artifact
version = "latest"
name = "{}:{}".format("{}_dataset".format(model_use_case_id), version)
artifact = run.use_artifact(artifact_or_name=name)

# Get specific content from the dataframe
train_table = artifact.get("train_table")
x_train = train_table.get_column("x_train", convert_to="numpy")
y_train = train_table.get_column("y_train", convert_to="numpy")

For more information about tracking the inputs and output of a model, see Create model lineage map.

Define and train model

For this walkthrough, define a 2D Convolutional Neural Network (CNN) with Keras to classify images from the MNIST dataset.

Train CNN on MNIST data
# Store values from our config dictionary into variables for easy accessing
num_classes = 10
input_shape = (28, 28, 1)
loss = "categorical_crossentropy"
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
batch_size = run.config["batch_size"]
epochs = run.config["epochs"]
validation_split = run.config["validation_split"]

# Create model architecture
model = keras.Sequential(
    [
        layers.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)

# Generate labels for training data
y_train = keras.utils.to_categorical(y_train, num_classes)

# Create training and test set
x_t, x_v, y_t, y_v = train_test_split(x_train, y_train, test_size=0.33)

Next, train the model:

# Train the model
model.fit(
    x=x_t,
    y=y_t,
    batch_size=batch_size,
    epochs=epochs,
    validation_data=(x_v, y_v),
    callbacks=[WandbMetricsLogger()],  # stream training metrics to W&B (imported above)
)

Finally, save the model locally on your machine:

# Save model locally
path = "model.h5"
model.save(path)

Use the link_model API to log one or more model files to a W&B run and link the model to the W&B Model Registry.

path = "./model.h5"
registered_model_name = "MNIST-dev"

run.link_model(path=path, registered_model_name=registered_model_name)
run.finish()

W&B creates a registered model for you if the name you specify for registered_model_name does not already exist.

See link_model in the API Reference guide for more information on optional parameters.

Evaluate the performance of a model

It is common practice to evaluate the performance of one or more models.

First, get the evaluation dataset artifact stored in W&B in a previous step.

job_type = "evaluate_model"

# Initialize a run
run = wandb.init(project=project, entity=entity, job_type=job_type)

model_use_case_id = "mnist"
version = "latest"

# Get dataset artifact, mark it as a dependency
artifact = run.use_artifact(
    "{}:{}".format("{}_dataset".format(model_use_case_id), version)
)

# Get desired dataframe
eval_table = artifact.get("eval_table")
x_eval = eval_table.get_column("x_eval", convert_to="numpy")
y_eval = eval_table.get_column("y_eval", convert_to="numpy")

Download the model version from W&B that you want to evaluate. Use the use_model API to access and download your model.

alias = "latest"  # alias
name = "mnist_model"  # name of the model artifact

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name=f"{name}:{alias}")

Load the Keras model and compute the loss:

model = keras.models.load_model(downloaded_model_path)

y_eval = keras.utils.to_categorical(y_eval, 10)
loss, accuracy = model.evaluate(x_eval, y_eval)

Finally, log the loss metric to the W&B run:

# Log metrics, images, tables, or any data useful for evaluation
run.log(data={"loss": loss, "accuracy": accuracy})

Promote a model version

Mark a model version ready for the next stage of your machine learning workflow with a model alias. Each registered model can have one or more model aliases. A model alias can only belong to a single model version at a time.

For example, suppose that after evaluating a model’s performance, you are confident that the model is ready for production. To promote that model version, add the production alias to that specific model version.

You can add an alias to a model version interactively with the W&B App UI or programmatically with the Python SDK. The following steps show how to add an alias with the W&B Model Registry App:

  1. Navigate to the Model Registry App at https://wandb.ai/registry/model.
  2. Click View details next to the name of your registered model.
  3. Within the Versions section, click the View button next to the name of the model version you want to promote.
  4. Next to the Aliases field, click the plus icon (+).
  5. Type in production into the field that appears.
  6. Press Enter on your keyboard.
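
You can also apply the alias when you link the model version programmatically. The following is a minimal sketch that assumes link_model accepts an aliases argument and reuses the registered model from this walkthrough:

import wandb

run = wandb.init(entity="<entity>", project="<project>")
run.link_model(
    path="./model.h5",
    registered_model_name="MNIST-dev",
    aliases=["production"],  # assumed parameter; applies the alias to the newly linked version
)
run.finish()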

3.9.2 - Model Registry Terms and Concepts

Model Registry terms and concepts

The following terms describe key components of the W&B Model Registry: model version, model artifact, and registered model.

Model version

A model version represents a single model checkpoint. Model versions are a snapshot at a point in time of a model and its files within an experiment.

A model version is an immutable directory of data and metadata that describes a trained model. W&B suggests that you add files to your model version that let you store (and restore) your model architecture and learned parameters at a later date.

A model version belongs to one, and only one, model artifact. A model version can belong to zero or more registered models. Model versions are stored in a model artifact in the order they are logged to the model artifact. W&B automatically creates a new model version if it detects that a model you log (to the same model artifact) has different contents than a previous model version.

Store files within model versions that are produced from the serialization process provided by your modeling library (for example, PyTorch and Keras).

Model alias

Model aliases are mutable strings that allow you to uniquely identify or reference a model version in your registered model with a semantically related identifier. You can only assign an alias to one version of a registered model. This is because an alias should refer to a unique version when used programmatically. It also allows aliases to be used to capture a model’s state (champion, candidate, production).

It is common practice to use aliases such as "best", "latest", "production", or "staging" to mark model versions with special purposes.

For example, suppose you create a model version and assign it a "best" alias. You can refer to that specific model version with run.use_model:

import wandb

run = wandb.init()
# Replace entity, project, model_artifact_name, and alias with your own values
name = f"{entity}/{project}/{model_artifact_name}:{alias}"
run.use_model(name=name)

Model tags

Model tags are keywords or labels that belong to one or more registered models.

Use model tags to organize registered models into categories and to search over those categories in the Model Registry’s search bar. Model tags appear at the top of the Registered Model Card. You might choose to use them to group your registered models by ML task, owning team, or priority. The same model tag can be added to multiple registered models to allow for grouping.

Model artifact

A model artifact is a collection of logged model versions. Model versions are stored in a model artifact in the order they are logged to the model artifact.

A model artifact can contain one or more model versions. A model artifact can be empty if no model versions are logged to it.

For example, suppose you create a model artifact. During model training, you periodically save your model during checkpoints. Each checkpoint corresponds to its own model version. All of the model versions created during your model training and checkpoint saving are stored in the same model artifact you created at the beginning of your training script.

The preceding image shows a model artifact that contains three model versions: v0, v1, and v2.

View an example model artifact here.
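
As a rough sketch of that pattern (the artifact name and simulated checkpoint files below are illustrative), logging the same model name after each epoch appends a new version to a single model artifact:

import wandb

run = wandb.init(project="models_quickstart")

for epoch in range(3):
    # Simulate saving a checkpoint after each epoch
    with open("my_model.h5", "w") as f:
        f.write(f"Model checkpoint {epoch}")

    # Each call adds a new version (v0, v1, v2) to the same model artifact
    run.log_model(path="./my_model.h5", name="mnist-checkpoints")

run.finish()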

Registered model

A registered model is a collection of pointers (links) to model versions. You can think of a registered model as a folder of “bookmarks” of candidate models for the same ML task. Each “bookmark” of a registered model is a pointer to a model version that belongs to a model artifact. You can use model tags to group your registered models.

Registered models often represent candidate models for a single modeling use case or task. For example, you might create a registered model for each image classification task based on the model you use: ImageClassifier-ResNet50, ImageClassifier-VGG16, DogBreedClassifier-MobileNetV2, and so on. Model versions are assigned version numbers in the order in which they were linked to the registered model.

View an example Registered Model here.

3.9.3 - Track a model

Track a model, the model’s dependencies, and other information relevant to that model with the W&B Python SDK.

Track a model, the model’s dependencies, and other information relevant to that model with the W&B Python SDK.

Under the hood, W&B creates a lineage of model artifacts that you can view with the W&B App UI or programmatically with the W&B Python SDK. See the Create model lineage map section for more information.

How to log a model

Use the run.log_model API to log a model. Provide the path where your model files are saved to the path parameter. The path can be a local file, directory, or reference URI to an external bucket such as s3://bucket/path.

Optionally provide a name for the model artifact with the name parameter. If name is not specified, W&B uses the basename of the input path prepended with the run ID.

Copy and paste the following code snippet. Replace values enclosed in <> with your own.

import wandb

# Initialize a W&B run
run = wandb.init(project="<project>", entity="<entity>")

# Log the model
run.log_model(path="<path-to-model>", name="<name>")
Example: Log a Keras model to W&B

The following code example shows how to log a convolutional neural network (CNN) model to W&B.

import os
import wandb
from tensorflow import keras
from tensorflow.keras import layers

config = {"optimizer": "adam", "loss": "categorical_crossentropy"}

# Initialize a W&B run
run = wandb.init(entity="charlie", project="mnist-project", config=config)

# Training algorithm
loss = run.config["loss"]
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
num_classes = 10
input_shape = (28, 28, 1)

model = keras.Sequential(
    [
        layers.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)

model.compile(loss=loss, optimizer=optimizer, metrics=metrics)

# Save model
model_filename = "model.h5"
local_filepath = "./"
full_path = os.path.join(local_filepath, model_filename)
model.save(filepath=full_path)

# Log the model
run.log_model(path=full_path, name="MNIST")

# Explicitly tell W&B to end the run.
run.finish()

3.9.4 - Create a registered model

Create a registered model to hold all the candidate models for your modeling tasks.

Create a registered model to hold all the candidate models for your modeling tasks. You can create a registered model interactively within the Model Registry or programmatically with the Python SDK.

Programmatically create a registered model

Programmatically register a model with the W&B Python SDK. W&B automatically creates a registered model for you if the registered model doesn’t exist.

Replace the values enclosed in <> with your own:

import wandb

run = wandb.init(entity="<entity>", project="<project>")
run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")
run.finish()

The name you provide for registered_model_name is the name that appears in the Model Registry App.

Interactively create a registered model

Interactively create a registered model within the Model Registry App.

  1. Navigate to the Model Registry App at https://wandb.ai/registry/model.
  2. Click the New registered model button located in the top right of the Model Registry page.
  3. From the panel that appears, select the entity you want the registered model to belong to from the Owning Entity dropdown.
  4. Provide a name for your model in the Name field.
  5. From the Type dropdown, select the type of artifacts to link to the registered model.
  6. (Optional) Add a description about your model in the Description field.
  7. (Optional) Within the Tags field, add one or more tags.
  8. Click Register model.

3.9.5 - Link a model version

Link a model version to a registered model with the W&B App or programmatically with the Python SDK.

Link a model version to a registered model with the W&B App or programmatically with the Python SDK.

Use the link_model method to programmatically log model files to a W&B run and link them to the W&B Model Registry.

Replace the values enclosed in <> with your own:

import wandb

run = wandb.init(entity="<entity>", project="<project>")
run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")
run.finish()

W&B creates a registered model for you if the name you specify for the registered-model-name parameter does not already exist.

For example, suppose you have an existing registered model named “Fine-Tuned-Review-Autocompletion” (registered-model-name="Fine-Tuned-Review-Autocompletion") in your Model Registry, and suppose that a few model versions are linked to it: v0, v1, v2. If you programmatically link a new model and use the same registered model name (registered-model-name="Fine-Tuned-Review-Autocompletion"), W&B links this model to the existing registered model and assigns it model version v3. If no registered model with this name exists, a new registered model is created and the linked model is assigned version v0.

See an example “Fine-Tuned-Review-Autocompletion” registered model here.

Interactively link a model with the Model Registry or with the Artifact browser.

  1. Navigate to the Model Registry App at https://wandb.ai/registry/model.
  2. Hover your mouse next to the name of the registered model you want to link a new model to.
  3. Select the meatball menu icon (three horizontal dots) next to View details.
  4. From the dropdown, select Link new version.
  5. From the Project dropdown, select the name of the project that contains your model.
  6. From the Model Artifact dropdown, select the name of the model artifact.
  7. From the Version dropdown, select the model version you want to link to the registered model.
  1. Navigate to your project’s artifact browser on the W&B App at: https://wandb.ai/<entity>/<project>/artifacts
  2. Select the Artifacts icon on the left sidebar.
  3. Click on the model version you want to link to your registry.
  4. Within the Version overview section, click the Link to registry button.
  5. From the modal that appears on the right of the screen, select a registered model from the Select a registered model dropdown menu.
  6. Click Next step.
  7. (Optional) Select an alias from the Aliases dropdown.
  8. Click Link to registry.

View the source of linked models

There are two ways to view the source of linked models: The artifact browser within the project that the model is logged to and the W&B Model Registry.

A pointer connects a specific model version in the model registry to the source model artifact (located within the project the model is logged to). The source model artifact also has a pointer to the model registry.

  1. Navigate to your model registry at https://wandb.ai/registry/model.
  2. Select View details next to the name of your registered model.
  3. Within the Versions section, select View next to the model version you want to investigate.
  4. Click on the Version tab within the right panel.
  5. Within the Version overview section there is a row that contains a Source Version field. The Source Version field shows both the name of the model and the model’s version.

For example, the following image shows a v0 model version called mnist_model (see Source version field mnist_model:v0), linked to a registered model called MNIST-dev.

  1. Navigate to your project’s artifact browser on the W&B App at: https://wandb.ai/<entity>/<project>/artifacts
  2. Select the Artifacts icon on the left sidebar.
  3. Expand the model dropdown menu from the Artifacts panel.
  4. Select the name and version of the model linked to the model registry.
  5. Click on the Version tab within the right panel.
  6. Within the Version overview section there is a row that contains a Linked To field. The Linked To field shows both the name of the registered model and its linked version (registered-model-name:version).

For example, in the following image, there is a registered model called MNIST-dev (see the Linked To field). A model version called mnist_model with a version v0(mnist_model:v0) points to the MNIST-dev registered model.

3.9.6 - Organize models

Use model tags to organize registered models into categories and to search over those categories.

  1. Navigate to the W&B Model Registry app at https://wandb.ai/registry/model.

  2. Select View details next to the name of the registered model you want to add a model tag to.

  3. Scroll to the Model card section.

  4. Click the plus button (+) next to the Tags field.

  5. Type in the name for your tag or search for a pre-existing model tag. For example, the following image shows multiple model tags added to a registered model called FineTuned-Review-Autocompletion:

3.9.7 - Create model lineage map

A useful feature of logging model artifacts to W&B is lineage graphs. Lineage graphs show artifacts logged by a run as well as artifacts used by a specific run.

This means that, when you log a model artifact, you can at a minimum view the W&B run that used or produced the model artifact. If you track a dependency, you also see the inputs used by the model artifact.

For example, the following image shows artifacts created and used throughout an ML experiment:

From left to right, the image shows:

  1. The jumping-monkey-1 W&B run created the mnist_dataset:v0 dataset artifact.
  2. The vague-morning-5 W&B run trained a model using the mnist_dataset:v0 dataset artifact. The output of this W&B run was a model artifact called mnist_model:v0.
  3. A run called serene-haze-6 used the model artifact (mnist_model:v0) to evaluate the model.
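The following minimal sketch shows how a lineage like this can arise from three separate runs. The project, artifact, and file names are placeholders, and the training and evaluation code is elided.

import wandb

# Run 1: log a dataset artifact
with wandb.init(project="mnist-project", job_type="upload-data") as run:
    dataset = wandb.Artifact("mnist_dataset", type="dataset")
    dataset.add_file("train.csv")  # assumes this file exists locally
    run.log_artifact(dataset)

# Run 2: declare the dataset as a dependency and log a model
with wandb.init(project="mnist-project", job_type="train") as run:
    run.use_artifact("mnist_dataset:latest")
    # ... training code would go here ...
    run.log_model(path="model.h5", name="mnist_model")  # assumes model.h5 was saved locally

# Run 3: use the model artifact for evaluation
with wandb.init(project="mnist-project", job_type="evaluate") as run:
    model_path = run.use_model(name="mnist_model:latest")
    # ... evaluation code using model_path would go here ...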

Track an artifact dependency

Declare a dataset artifact as an input to a W&B run with the use_artifact API to track a dependency.

The following code snippet shows how to use the use_artifact API:

import wandb

# Initialize a run
run = wandb.init(project=project, entity=entity)

# Get artifact, mark it as a dependency
artifact = run.use_artifact(artifact_or_name="name", aliases="<alias>")

Once you have retrieved your artifact, you can use that artifact to, for example, evaluate the performance of a model.

Example: Train a model and track a dataset as the input of a model

import wandb
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from wandb.keras import WandbCallback

job_type = "train_model"

config = {
    "optimizer": "adam",
    "batch_size": 128,
    "epochs": 5,
    "validation_split": 0.1,
}

run = wandb.init(project=project, job_type=job_type, config=config)

version = "latest"
# `model_use_case_id` is a placeholder identifier for your use case, for example "mnist"
name = "{}:{}".format("{}_dataset".format(model_use_case_id), version)

artifact = run.use_artifact(name)

train_table = artifact.get("train_table")
x_train = train_table.get_column("x_train", convert_to="numpy")
y_train = train_table.get_column("y_train", convert_to="numpy")

# Store values from our config dictionary into variables for easy access
num_classes = 10
input_shape = (28, 28, 1)
loss = "categorical_crossentropy"
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
batch_size = run.config["batch_size"]
epochs = run.config["epochs"]
validation_split = run.config["validation_split"]

# Create model architecture
model = keras.Sequential(
    [
        layers.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)

# Generate labels for training data
y_train = keras.utils.to_categorical(y_train, num_classes)

# Create training and test set
x_t, x_v, y_t, y_v = train_test_split(x_train, y_train, test_size=0.33)

# Train the model
model.fit(
    x=x_t,
    y=y_t,
    batch_size=batch_size,
    epochs=epochs,
    validation_data=(x_v, y_v),
    callbacks=[WandbCallback(log_weights=True, log_evaluation=True)],
)

# Save model locally
path = "model.h5"
model.save(path)

path = "./model.h5"
registered_model_name = "MNIST-dev"
name = "mnist_model"

run.link_model(path=path, registered_model_name=registered_model_name, name=name)
run.finish()

3.9.8 - Document machine learning model

Add descriptions to the model card to document your model

Add a description to the model card of your registered model to document aspects of your machine learning model. Some topics worth documenting include:

  • Summary: A summary of what the model is. The purpose of the model. The machine learning framework the model uses, and so forth.
  • Training data: Describe the training data used, the processing applied to the training data set, where that data is stored, and so forth.
  • Architecture: Information about the model architecture, layers, and any specific design choices.
  • Deserialize the model: Provide information on how someone on your team can load the model into memory.
  • Task: The specific type of task or problem that the machine learning model is designed to perform. It’s a categorization of the model’s intended capability.
  • License: The legal terms and permissions associated with the use of the machine learning model. It helps model users understand the legal framework under which they can utilize the model.
  • References: Citations or references to relevant research papers, datasets, or external resources.
  • Deployment: Details on how and where the model is deployed and guidance on how the model is integrated into other enterprise systems, such as workflow orchestration platforms.

Add a description to the model card

  1. Navigate to the W&B Model Registry app at https://wandb.ai/registry/model.
  2. Select View details next to the name of the registered model you want to create a model card for.
  3. Go to the Model card section.
  4. Within the Description field, provide information about your machine learning model. Format text within a model card with Markdown markup language.

For example, the following image shows the model card of a Credit-card Default Prediction registered model.

3.9.9 - Download a model version

How to download a model with W&B Python SDK

Use the W&B Python SDK to download a model artifact that you linked to the Model Registry.

Replace values within <> with your own:

import wandb

# Initialize a run
run = wandb.init(project="<project>", entity="<entity>")

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name="<your-model-name>")

Reference a model version with one of the following formats:

  • latest - Use the latest alias to specify the model version that was most recently linked.
  • v# - Use v0, v1, v2, and so on to fetch a specific version in the registered model.
  • alias - Specify the custom alias that you and your team assigned to your model version.

See use_model in the API Reference guide for more information on possible parameters and return type.

Example: Download and use a logged model

For example, in the following code snippet a user calls the use_model API. They specify the name of the model artifact they want to fetch and provide a version or alias. The path returned by the API is stored in the downloaded_model_path variable.

import wandb

entity = "luka"
project = "NLP_Experiments"
alias = "latest"  # semantic nickname or identifier for the model version
model_artifact_name = "fine-tuned-model"

# Initialize a run
run = wandb.init()
# Access and download model. Returns path to downloaded artifact

downloaded_model_path = run.use_model(name=f"{entity}/{project}/{model_artifact_name}:{alias}")


Alternatively, download a model version interactively with the W&B App UI:

  1. Navigate to the Model Registry App at https://wandb.ai/registry/model.
  2. Select View details next to the name of the registered model that contains the model you want to download.
  3. Within the Versions section, select the View button next to the model version you want to download.
  4. Select the Files tab.
  5. Click on the download button next to the model file you want to download.

3.9.10 - Create alerts and notifications

Get Slack notifications when a new model version is linked to the model registry.

Receive Slack notifications when a new model version is linked to the model registry.

  1. Navigate to the W&B Model Registry app at https://wandb.ai/registry/model.
  2. Select the registered model you want to receive notifications from.
  3. Click on the Connect Slack button.
  4. Follow the instructions that appear on the OAuth page to enable W&B in your Slack workspace.

Once you have configured Slack notifications for your team, you can pick and choose registered models to get notifications from.

The screenshot below shows a FMNIST classifier registered model that has Slack notifications.

A message is automatically posted to the connected Slack channel each time a new model version is linked to the FMNIST classifier registered model.

3.9.11 - Manage data governance and access control

Use model registry role based access controls (RBAC) to control who can update protected aliases.

Use protected aliases to represent key stages of your model development pipeline. Only Model Registry administrators can define, add, modify, or remove protected aliases. W&B blocks non-admin users from adding or removing protected aliases from model versions.

For example, suppose you set staging and production as protected aliases. Any member of your team can add new model versions. However, only admins can add a staging or production alias.

Set up access control

The following steps describe how to set up access controls for your team’s model registry.

  1. Navigate to the W&B Model Registry app at https://wandb.ai/registry/model.
  2. Select the gear button on the top right of the page.
  3. Select the Manage registry admins button.
  4. Within the Members tab, select the users you want to grant access to add and remove protected aliases from model versions.

Add protected aliases

  1. Navigate to the W&B Model Registry app at https://wandb.ai/registry/model.
  2. Select the gear button on the top right of the page.
  3. Scroll down to the Protected Aliases section.
  4. Click the plus (+) icon to add a new alias.

4 - Automations

4.1 - Model registry automations

Use an Automation for model CI (automated model evaluation pipelines) and model deployment.

Create an automation to trigger workflow steps, such as automated model testing and deployment. To create an automation, define the action you want to occur based on an event type.

For example, you can create a trigger that automatically deploys a model to GitHub when you add a new version of a registered model.

Event types

An event is a change that takes place in the W&B ecosystem. The Model Registry supports two event types:

  • Use Linking a new artifact to a registered model to test new model candidates.
  • Use Adding a new alias to a version of the registered model to specify an alias that represents a special step of your workflow, like deploy, and trigger the automation any time that alias is applied to a new model version.

See Link a model version and Create a custom alias.

Create a webhook automation

Automate a webhook based on an action with the W&B App UI. To do this, first establish a webhook, then configure the webhook automation.

Add a secret for authentication or authorization

Secrets are team-level variables that let you obfuscate private strings such as credentials, API keys, passwords, tokens, and more. W&B recommends you use secrets to store any string whose plain-text content you want to protect.

To use a secret in your webhook, you must first add that secret to your team’s secret manager.

There are two types of secrets W&B suggests that you create when you use a webhook automation:

  • Access tokens: Authorize senders to help secure webhook requests
  • Secret: Ensure the authenticity and integrity of data transmitted from payloads

Follow the instructions below to create a webhook:

  1. Navigate to the W&B App UI.
  2. Click on Team Settings.
  3. Scroll down the page until you find the Team secrets section.
  4. Click on the New secret button.
  5. A modal will appear. Provide a name for your secret in the Secret name field.
  6. Add your secret into the Secret field.
  7. (Optional) Repeat steps 5 and 6 to create another secret (such as an access token) if your webhook requires additional secret keys or tokens to authenticate your webhook.

Specify the secrets you want to use for your webhook automation when you configure the webhook. See the Configure a webhook section for more information.

Configure a webhook

Before you can use a webhook, first configure that webhook in the W&B App UI.

  1. Navigate to the W&B App UI.
  2. Click on Team Settings.
  3. Scroll down the page until you find the Webhooks section.
  4. Click on the New webhook button.
  5. Provide a name for your webhook in the Name field.
  6. Provide the endpoint URL for the webhook in the URL field.
  7. (Optional) From the Secret dropdown menu, select the secret you want to use to authenticate the webhook payload.
  8. (Optional) From the Access token dropdown menu, select the access token you want to use to authorize the sender.
  9. (Optional) From the Access token dropdown menu select additional secret keys or tokens required to authenticate a webhook (such as an access token).

Add a webhook

Once you have a webhook configured and (optionally) a secret, navigate to the Model Registry App at https://wandb.ai/registry/model.

  1. From the Event type dropdown, select an event type.
  2. (Optional) If you selected A new version is added to a registered model event, provide the name of a registered model from the Registered model dropdown.
  3. Select Webhooks from the Action type dropdown.
  4. Click on the Next step button.
  5. Select a webhook from the Webhook dropdown.
  6. (Optional) Provide a payload in the JSON expression editor. See the Example payload section for common use case examples.
  7. Click on Next step.
  8. Provide a name for your webhook automation in the Automation name field.
  9. (Optional) Provide a description for your webhook.
  10. Click on the Create automation button.

Example payloads

The following tabs demonstrate example payloads based on common use cases. Within the examples they reference the following keys to refer to condition objects in the payload parameters:

  • ${event_type} Refers to the type of event that triggered the action.
  • ${event_author} Refers to the user that triggered the action.
  • ${artifact_version} Refers to the specific artifact version that triggered the action. Passed as an artifact instance.
  • ${artifact_version_string} Refers to the specific artifact version that triggered the action. Passed as a string.
  • ${artifact_collection_name} Refers to the name of the artifact collection that the artifact version is linked to.
  • ${project_name} Refers to the name of the project owning the mutation that triggered the action.
  • ${entity_name} Refers to the name of the entity owning the mutation that triggered the action.

Send a repository dispatch from W&B to trigger a GitHub action. For example, suppose you have a workflow that accepts a repository dispatch as a trigger for the on key:

on:
  repository_dispatch:
    types: BUILD_AND_DEPLOY

The payload for the repository might look something like:

{
  "event_type": "BUILD_AND_DEPLOY",
  "client_payload": {
    "event_author": "${event_author}",
    "artifact_version": "${artifact_version}",
    "artifact_version_string": "${artifact_version_string}",
    "artifact_collection_name": "${artifact_collection_name}",
    "project_name": "${project_name}",
    "entity_name": "${entity_name}"
  }
}

The contents and positioning of rendered template strings depend on the event or model version the automation is configured for. ${event_type} renders as either LINK_ARTIFACT or ADD_ARTIFACT_ALIAS. See below for an example mapping:

${event_type} --> "LINK_ARTIFACT" or "ADD_ARTIFACT_ALIAS"
${event_author} --> "<wandb-user>"
${artifact_version} --> "wandb-artifact://_id/QXJ0aWZhY3Q6NTE3ODg5ODg3"
${artifact_version_string} --> "<entity>/model-registry/<registered_model_name>:<alias>"
${artifact_collection_name} --> "<registered_model_name>"
${project_name} --> "model-registry"
${entity_name} --> "<entity>"

Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. If those tools can call Python scripts, they can consume the registered model artifacts through the W&B API.
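For instance, a CI job triggered by this repository dispatch could pass the rendered ${artifact_version_string} value into a Python script and fetch the linked model through the W&B API. The following is a minimal sketch; the project name and the choice to pass the version string as a command-line argument are assumptions for illustration.

import sys

import wandb

# The registered model version string from the automation payload, for example
# "<entity>/model-registry/<registered_model_name>:<alias>" (passed as a CLI argument here)
artifact_version_string = sys.argv[1]

# Placeholder project name; replace with your own
run = wandb.init(project="model-ci", job_type="evaluate")

# Download the model files linked to the registered model version
model_path = run.use_model(name=artifact_version_string)

# ... load the model from model_path and run your evaluation here ...

run.finish()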

For more information about repository dispatch, see the official documentation on the GitHub Marketplace.

Watch the videos Webhook Automations for Model Evaluation and Webhook Automations for Model Deployment, which guide you to create automations for model evaluation and deployment.

Review a W&B report, which illustrates how to use a Github Actions webhook automation for Model CI. Check out this GitHub repository to learn how to create model CI with a Modal Labs webhook.

Configure an ‘Incoming Webhook’ to get the webhook URL for your Teams channel. The following is an example payload:

{
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": "New Notification",
"sections": [
  {
    "activityTitle": "Notification from WANDB",
    "text": "This is an example message sent via Teams webhook.",
    "facts": [
      {
        "name": "Author",
        "value": "${event_author}"
      },
      {
        "name": "Event Type",
        "value": "${event_type}"
      }
    ],
    "markdown": true
  }
]
}

You can use template strings to inject W&B data into your payload at the time of execution (as shown in the Teams example above).

Set up your Slack app and add an incoming webhook integration using the instructions in the Slack API documentation. Ensure that you specify the secret under Bot User OAuth Token as your W&B webhook’s access token.

The following is an example payload:

{
  "text": "New alert from WANDB!",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "Registry event: ${event_type}"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "New version: ${artifact_version_string}"
      }
    },
    {
      "type": "divider"
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "Author: ${event_author}"
      }
    }
  ]
}

Troubleshoot your webhook

Interactively troubleshoot your webhook with the W&B App UI or programmatically with a Bash script. You can troubleshoot a webhook when you create a new webhook or edit an existing webhook.

Interactively test a webhook with the W&B App UI.

  1. Navigate to your W&B Team Settings page.
  2. Scroll to the Webhooks section.
  3. Click on the horizontal three dots (meatball icon) next to the name of your webhook.
  4. Select Test.
  5. From the UI panel that appears, paste your POST request to the field that appears.
  6. Click on Test webhook.

Within the W&B App UI, W&B posts the response made by your endpoint.

Watch the video Testing webhooks in Weights & Biases for a real-world example.

The following bash script generates a POST request similar to the POST request W&B sends to your webhook automation when it is triggered.

Copy and paste the code below into a shell script to troubleshoot your webhook. Specify your own values for the following:

  • ACCESS_TOKEN
  • SECRET
  • PAYLOAD
  • API_ENDPOINT
#!/bin/bash

# Your access token and secret
ACCESS_TOKEN="your_api_key" 
SECRET="your_api_secret"

# The data you want to send (for example, in JSON format)
PAYLOAD='{"key1": "value1", "key2": "value2"}'

# The endpoint URL for your webhook
API_ENDPOINT="your_api_endpoint"

# Generate the HMAC signature
# For security, Wandb includes the X-Wandb-Signature in the header computed 
# from the payload and the shared secret key associated with the webhook 
# using the HMAC with SHA-256 algorithm.
SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -binary | base64)

# Make the cURL request
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "X-Wandb-Signature: $SIGNATURE" \
  -d "$PAYLOAD" "$API_ENDPOINT"
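On the receiving end, your endpoint can verify a request by recomputing the same HMAC-SHA256 signature over the raw payload and comparing it to the X-Wandb-Signature header. The following is a minimal, framework-agnostic sketch of that check; the function name and example values are illustrative.

import base64
import hashlib
import hmac

def signature_is_valid(raw_body: bytes, received_signature: str, shared_secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the payload and compare it to the header value."""
    expected = base64.b64encode(
        hmac.new(shared_secret.encode(), raw_body, hashlib.sha256).digest()
    ).decode()
    # Use a constant-time comparison to avoid leaking information through timing
    return hmac.compare_digest(expected, received_signature)

# Example usage with the same payload and secret as the script above
payload = b'{"key1": "value1", "key2": "value2"}'
secret = "your_api_secret"
signature = base64.b64encode(hmac.new(secret.encode(), payload, hashlib.sha256).digest()).decode()
print(signature_is_valid(payload, signature, secret))  # True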

View automation

View automations associated to a registered model from the W&B App UI.

  1. Navigate to the Model Registry App at https://wandb.ai/registry/model.
  2. Select on a registered model.
  3. Scroll to the bottom of the page to the Automations section.

Within the Automations section you can find the following properties of automations created for the model you selected:

  • Trigger type: The type of trigger that was configured.
  • Action type: The action type that triggers the automation.
  • Action name: The action name you provided when you created the automation.
  • Queue: The name of the queue the job was enqueued to. This field is left empty if you selected a webhook action type.

Delete an automation

Delete an automation associated with a model. Actions in progress are not affected if you delete that automation before the action completes.

  1. Navigate to the Model Registry App at https://wandb.ai/registry/model.
  2. Click on a registered model.
  3. Scroll to the bottom of the page to the Automations section.
  4. Hover your mouse next to the name of the automation and click on the kebab (three vertical dots) menu.
  5. Select Delete.

4.2 - Trigger CI/CD events when artifact changes

Use a project-scoped artifact automation to trigger actions when aliases or versions in an artifact collection are created or changed.

Create an automation that triggers when an artifact is changed. Use artifact automations when you want to automate downstream actions for versioning artifacts. To create an automation, define the action you want to occur based on an event type.

Event types

An event is a change that takes place in the W&B ecosystem. You can define two different event types for artifact collections in your project: A new version of an artifact is created in a collection and An artifact alias is added.

Create a webhook automation

Automate a webhook based on an action with the W&B App UI. To do this, you will first establish a webhook, then you will configure the webhook automation.

Add a secret for authentication or authorization

Secrets are team-level variables that let you obfuscate private strings such as credentials, API keys, passwords, tokens, and more. W&B recommends you use secrets to store any string whose plain-text content you want to protect.

To use a secret in your webhook, you must first add that secret to your team’s secret manager.

There are two types of secrets W&B suggests that you create when you use a webhook automation:

  • Access tokens: Authorize senders to help secure webhook requests
  • Secret: Ensure the authenticity and integrity of data transmitted from payloads

Follow the instructions below to create a webhook:

  1. Navigate to the W&B App UI.
  2. Click on Team Settings.
  3. Scroll down the page until you find the Team secrets section.
  4. Click on the New secret button.
  5. A modal will appear. Provide a name for your secret in the Secret name field.
  6. Add your secret into the Secret field.
  7. (Optional) Repeat steps 5 and 6 to create another secret (such as an access token) if your webhook requires additional secret keys or tokens to authenticate your webhook.

Specify the secrets you want to use for your webhook automation when you configure the webhook. See the Configure a webhook section for more information.

Configure a webhook

Before you can use a webhook, you will first need to configure that webhook in the W&B App UI.

  1. Navigate to the W&B App UI.
  2. Click on Team Settings.
  3. Scroll down the page until you find the Webhooks section.
  4. Click on the New webhook button.
  5. Provide a name for your webhook in the Name field.
  6. Provide the endpoint URL for the webhook in the URL field.
  7. (Optional) From the Secret dropdown menu, select the secret you want to use to authenticate the webhook payload.
  8. (Optional) From the Access token dropdown menu, select the access token you want to use to authorize the sender.
  9. (Optional) From the Access token dropdown menu select additional secret keys or tokens required to authenticate a webhook (such as an access token).

Add a webhook

Once you have a webhook configured and (optionally) a secret, navigate to your project workspace. Click on the Automations tab on the left sidebar.

  1. From the Event type dropdown, select an event type.
  2. If you selected A new version of an artifact is created in a collection event, provide the name of the artifact collection that the automation should respond to from the Artifact collection dropdown.
  3. Select Webhooks from the Action type dropdown.
  4. Click on the Next step button.
  5. Select a webhook from the Webhook dropdown.
  6. (Optional) Provide a payload in the JSON expression editor. See the Example payload section for common use case examples.
  7. Click on Next step.
  8. Provide a name for your webhook automation in the Automation name field.
  9. (Optional) Provide a description for your webhook.
  10. Click on the Create automation button.

Example payloads

The following tabs demonstrate example payloads based on common use cases. Within the examples they reference the following keys to refer to condition objects in the payload parameters:

  • ${event_type} Refers to the type of event that triggered the action.
  • ${event_author} Refers to the user that triggered the action.
  • ${artifact_version} Refers to the specific artifact version that triggered the action. Passed as an artifact instance.
  • ${artifact_version_string} Refers to the specific artifact version that triggered the action. Passed as a string.
  • ${artifact_collection_name} Refers to the name of the artifact collection that the artifact version is linked to.
  • ${project_name} Refers to the name of the project owning the mutation that triggered the action.
  • ${entity_name} Refers to the name of the entity owning the mutation that triggered the action.

Send a repository dispatch from W&B to trigger a GitHub action. For example, suppose you have a workflow that accepts a repository dispatch as a trigger for the on key:

on:
  repository_dispatch:
    types: BUILD_AND_DEPLOY

The payload for the repository might look something like:

{
  "event_type": "BUILD_AND_DEPLOY",
  "client_payload": {
    "event_author": "${event_author}",
    "artifact_version": "${artifact_version}",
    "artifact_version_string": "${artifact_version_string}",
    "artifact_collection_name": "${artifact_collection_name}",
    "project_name": "${project_name}",
    "entity_name": "${entity_name}"
  }
}

The contents and positioning of rendered template strings depend on the event or model version the automation is configured for. ${event_type} renders as either LINK_ARTIFACT or ADD_ARTIFACT_ALIAS. See below for an example mapping:

${event_type} --> "LINK_ARTIFACT" or "ADD_ARTIFACT_ALIAS"
${event_author} --> "<wandb-user>"
${artifact_version} --> "wandb-artifact://_id/QXJ0aWZhY3Q6NTE3ODg5ODg3"
${artifact_version_string} --> "<entity>/<project_name>/<artifact_name>:<alias>"
${artifact_collection_name} --> "<artifact_collection_name>"
${project_name} --> "<project_name>"
${entity_name} --> "<entity>"

Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. If those tools can call Python scripts, they can consume W&B artifacts through the W&B API.
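Similarly, a downstream script for an artifact automation could take the rendered ${artifact_version_string} and download the artifact with the W&B public API. The following is a minimal sketch; passing the version string as a command-line argument is an assumption for illustration.

import sys

import wandb

# For example "<entity>/<project_name>/<artifact_name>:<alias>" from the payload
artifact_version_string = sys.argv[1]

api = wandb.Api()
artifact = api.artifact(artifact_version_string)

# Download the artifact files and hand them to the rest of your pipeline
local_dir = artifact.download()
print(f"Artifact downloaded to {local_dir}")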

For more information about repository dispatch, see the official documentation on the GitHub Marketplace.

Configure an ‘Incoming Webhook’ to get the webhook URL for your Teams channel. The following is an example payload:

{
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": "New Notification",
"sections": [
  {
    "activityTitle": "Notification from WANDB",
    "text": "This is an example message sent via Teams webhook.",
    "facts": [
      {
        "name": "Author",
        "value": "${event_author}"
      },
      {
        "name": "Event Type",
        "value": "${event_type}"
      }
    ],
    "markdown": true
  }
]
}

You can use template strings to inject W&B data into your payload at the time of execution (as shown in the Teams example above).

Set up your Slack app and add an incoming webhook integration using the instructions in the Slack API documentation. Ensure that you specify the secret under Bot User OAuth Token as your W&B webhook’s access token.

The following is an example payload:

{
  "text": "New alert from WANDB!",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "Artifact event: ${event_type}"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "New version: ${artifact_version_string}"
      }
    },
    {
      "type": "divider"
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "Author: ${event_author}"
      }
    }
  ]
}

Troubleshoot your webhook

Interactively troubleshoot your webhook with the W&B App UI or programmatically with a Bash script. You can troubleshoot a webhook when you create a new webhook or edit an existing webhook.

Interactively test a webhook with the W&B App UI.

  1. Navigate to your W&B Team Settings page.
  2. Scroll to the Webhooks section.
  3. Click on the horizontal three dots (meatball icon) next to the name of your webhook.
  4. Select Test.
  5. From the UI panel that appears, paste your POST request to the field that appears.
  6. Click on Test webhook.

Within the W&B App UI, W&B posts the response made by your endpoint.

Watch the Testing Webhooks in Weights & Biases YouTube video for a real-world example.

The following bash script generates a POST request similar to the POST request W&B sends to your webhook automation when it is triggered.

Copy and paste the code below into a shell script to troubleshoot your webhook. Specify your own values for the following:

  • ACCESS_TOKEN
  • SECRET
  • PAYLOAD
  • API_ENDPOINT
webhook_test.sh (the script is identical to the example shown in the Model registry automations section above)

View an automation

View automations associated to an artifact from the W&B App UI.

  1. Navigate to your project workspace on the W&B App.
  2. Click on the Automations tab on the left sidebar.

Within the Automations section you can find the following properties for each automation created in your project:

  • Trigger type: The type of trigger that was configured.
  • Action type: The action type that triggers the automation.
  • Action name: The action name you provided when you created the automation.
  • Queue: The name of the queue the job was enqueued to. This field is left empty if you selected a webhook action type.

Delete an automation

Delete an automation associated with an artifact. Actions in progress are not affected if you delete that automation before the action completes.

  1. Navigate to your project workspace on the W&B App.
  2. Click on the Automations tab on the left sidebar.
  3. From the list, select the name of the automation you want to delete.
  4. Hover your mouse next to the name of the automation and click on the kebob (three vertical dots) menu.
  5. Select Delete.

5 - W&B App UI Reference

5.1 - Panels

Use workspace panel visualizations to explore your logged data by key, visualize the relationships between hyperparameters and output metrics, and more.

Workspace modes

W&B projects support two different workspace modes. The icon next to the workspace name shows its mode.

  • Automated workspaces (automated workspace icon) automatically generate panels for all keys logged in the project. This can help you get started by visualizing all available data for the project.
  • Manual workspaces (manual workspace icon) start as blank slates and display only those panels intentionally added by users. Choose a manual workspace when you care mainly about a fraction of the keys logged in the project, or for a more focused analysis.

To change how a workspace generates panels, reset the workspace.

Reset a workspace

To reset a workspace:

  1. At the top of the workspace, click the action menu ....
  2. Click Reset workspace.

Add panels

You can add panels to your workspace, either globally or at the section level.

To add a panel:

  1. To add a panel globally, click Add panels in the control bar near the panel search field.

  2. To add a panel directly to a section instead, click the section’s action ... menu, then click + Add panels.

  3. Select the type of panel to add.

Quick add

Quick Add allows you to select a key in the project from a list to generate a standard panel for it.

Quick add is not available for an automated workspace with no deleted panels, since such a workspace already displays panels for all logged keys. You can use Quick add to re-add panels that you previously deleted.

Custom panel add

To add a custom panel to your workspace:

  1. Select the type of panel you’d like to create.
  2. Follow the prompts to configure the panel.

To learn more about the options for each type of panel, refer to the relevant section below, such as Line plots or Bar plots.

Manage panels

Edit a panel

To edit a panel:

  1. Click its pencil icon.
  2. Modify the panel’s settings.
  3. To change the panel to a different type, select the type and then configure the settings.
  4. Click Apply.

Move a panel

To move a panel to a different section, you can use the drag handle on the panel. To select the new section from a list instead:

  1. If necessary, create a new section by clicking Add section after the last section.
  2. Click the action ... menu for the panel.
  3. Click Move, then select a new section.

You can also use the drag handle to rearrange panels within a section.

Share a full-screen panel directly

Direct colleagues to a specific panel in your project. The link redirects users to a full screen view of that panel when they click that link. To create a link to a panel:

  1. Hover your mouse over the panel.
  2. Select the action ... menu.
  3. Click Copy panel URL.

The settings of the project determine who can view the panel. This means that if the project is private, only members of the project can view the panel. If the project is public, anyone with the link can view the panel.

If multiple panels have the same name, W&B shares the first panel with that name.

Duplicate a panel

To duplicate a panel:

  1. At the top of the panel, click the action ... menu.
  2. Click Duplicate.

If desired, you can customize or move the duplicated panel.

Remove panels

To remove a panel:

  1. Hover your mouse over the panel.
  2. Select the action ... menu.
  3. Click Delete.

To remove all panels from a manual workspace, click its action ... menu, then click Clear all panels.

To remove all panels from an automatic or manual workspace, you can reset the workspace. Select Automatic to start with the default set of panels, or select Manual to start with an empty workspace with no panels.

Manage sections

By default, sections in a workspace reflect the logging hierarchy of your keys. However, in a manual workspace, sections appear only after you start adding panels.

Add a section

To add a section, click Add section after the last section.

To add a new section before or after an existing section, you can instead click the section’s action ... menu, then click New section below or New section above.

Rename a section

To rename a section, click its action ... menu, then click Rename section.

Delete a section

To delete a section, click its ... menu, then click Delete section. This removes the section and its panels.

5.1.1 - Line plots

Visualize metrics, customize axes, and compare multiple lines on a plot

Line plots show up by default when you plot metrics over time with wandb.log(). Customize with chart settings to compare multiple lines on the same plot, calculate custom axes, and rename labels.

Edit line panel settings

Hover your mouse over the panel whose settings you want to edit. Select the pencil icon that appears. Within the modal that appears, select a tab to edit the Data, Grouping, Chart, Expressions, or Legend settings.

Data

Select the Data tab to edit the x-axis, y-axis, smoothing filter, point aggregation, and more. The following list describes some of the options you can edit:

  • X axis: By default the x-axis is set to Step. You can change the x-axis to Relative Time, or select a custom axis based on values you log with W&B.
    • Relative Time (Wall) is clock time since the process started, so if you started a run, resumed it a day later, and logged a value, that point would be plotted at 24 hrs.
    • Relative Time (Process) is time inside the running process, so if you started a run, ran it for 10 seconds, and resumed it a day later, that point would be plotted at 10 s.
    • Wall Time is minutes elapsed since the start of the first run on the graph.
    • Step increments by default each time wandb.log() is called, and reflects the number of training steps you have logged from your model.
  • Y axes: Select y-axes from the logged values, including metrics and hyperparameters that change over time.
  • Min, max, and log scale: Minimum, maximum, and log scale settings for x axis and y axis in line plots
  • Smoothing: Change the smoothing on the line plot.
  • Outliers: Rescale to exclude outliers from the default plot min and max scale
  • Max runs to show: Show more lines on the line plot at once by increasing this number, which defaults to 10 runs. You’ll see the message “Showing first 10 runs” on the top of the chart if there are more than 10 runs available but the chart is constraining the number visible.
  • Chart type: Change between a line plot, an area plot, and a percentage area plot

Grouping

Select the Grouping tab to use group by methods to organize your data.

  • Group by: Select a column, and all the runs with the same value in that column will be grouped together.
  • Agg: Aggregation, the value of the line on the graph. The options are mean, median, min, and max of the group.

Chart

Select the Chart tab to edit the plot’s title, axis titles, legend, and more.

  • Title: Add a custom title for line plot, which shows up at the top of the chart
  • X-Axis title: Add a custom title for the x-axis of the line plot, which shows up in the lower right corner of the chart.
  • Y-Axis title: Add a custom title for the y-axis of the line plot, which shows up in the upper left corner of the chart.
  • Show legend: Toggle legend on or off
  • Font size: Change the font size of the chart title, x-axis title, and y-axis title
  • Legend position: Change the position of the legend on the chart

Legend

  • Legend: Select the field that you want to see in the legend of the plot for each line. For example, you could show the name of the run and the learning rate.
  • Legend template: Fully customizable, this powerful template allows you to specify exactly what text and variables you want to show up in the template at the top of the line plot as well as the legend that appears when you hover your mouse over the plot.

Expressions

  • Y Axis Expressions: Add calculated metrics to your graph. You can use any of the logged metrics as well as configuration values like hyperparameters to calculate custom lines.
  • X Axis Expressions: Rescale the x-axis to use calculated values using custom expressions. Useful variables include _step for the default x-axis, and the syntax for referencing summary values is ${summary:value}.

Visualize average values on a plot

If you have several different experiments and you’d like to see the average of their values on a plot, you can use the Grouping feature in the table. Click “Group” above the run table and select “All” to show averaged values in your graphs.

Here is what the graph looks like before averaging:

The following image shows a graph that represents average values across runs using grouped lines.

Visualize NaN value on a plot

You can also plot NaN values, including NaN values from PyTorch tensors, on a line plot with wandb.log. For example:

wandb.log({"test": [..., float("nan"), ...]})

Compare two metrics on one chart

  1. Select the Add panels button in the top right corner of the page.
  2. From the left panel that appears, expand the Evaluation dropdown.
  3. Select Run comparer

Change the color of the line plots

Sometimes the default colors of runs are not helpful for comparison. To help with this, W&B provides two places where you can manually change the colors.

Each run is given a random color by default upon initialization.

Random colors given to runs

Click any of the colors to open a color palette, from which you can manually choose the color you want.

The color palette
  1. Hover your mouse over the panel you want to edit its settings for.
  2. Select the pencil icon that appears.
  3. Choose the Legend tab.

Visualize on different x axes

If you’d like to see the absolute time that an experiment has taken, or see what day an experiment ran, you can switch the x axis. Here’s an example of switching from steps to relative time and then to wall time.

Area plots

In the line plot settings, in the advanced tab, click on different plot styles to get an area plot or a percentage area plot.

Zoom

Click and drag a rectangle to zoom vertically and horizontally at the same time. This changes the x-axis and y-axis zoom.

Hide chart legend

Turn off the legend in the line plot with this simple toggle:

5.1.1.1 - Line plot reference

X-Axis

Selecting X-Axis

You can set the x-axis of a line plot to any value that you have logged with wandb.log as long as it’s always logged as a number.
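For example, if you log a numeric key alongside your metric, you can later choose that key as the x-axis. In the minimal sketch below, custom_step is an illustrative key name, not a built-in value:

import wandb

run = wandb.init(project="my-project")  # placeholder project name

for epoch in range(10):
    # Log a numeric key alongside the metric; it can then be selected as a custom x-axis
    wandb.log({"custom_step": epoch * 100, "loss": 1.0 / (epoch + 1)})

run.finish()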

Y-Axis variables

You can set the y-axis variables to any value you have logged with wandb.log as long as you were logging numbers, arrays of numbers or a histogram of numbers. If you logged more than 1500 points for a variable, W&B samples down to 1500 points.

X range and Y range

You can change the maximum and minimum values of X and Y for the plot.

X range default is from the smallest value of your x-axis to the largest.

Y range default is from the smallest value of your metrics and zero to the largest value of your metrics.

Max runs/groups

By default you will only plot 10 runs or groups of runs. The runs will be taken from the top of your runs table or run set, so if you sort your runs table or run set you can change the runs that are shown.

Legend

You can control the legend of your chart to show, for any run, any config value that you logged and metadata from the run, such as the created-at time or the user who created the run.

Example:

${run:displayName} - ${config:dropout} will make the legend name for each run something like royal-sweep - 0.5 where royal-sweep is the run name and 0.5 is the config parameter named dropout.

You can set values inside [[ ]] to display point-specific values in the crosshair when hovering over a chart. For example, [[ $x: $y ($original) ]] would display something like “2: 3 (2.9)”.

Supported values inside [[ ]] are as follows:

  • ${x}: X value
  • ${y}: Y value (including smoothing adjustment)
  • ${original}: Y value not including smoothing adjustment
  • ${mean}: Mean of grouped runs
  • ${stddev}: Standard deviation of grouped runs
  • ${min}: Min of grouped runs
  • ${max}: Max of grouped runs
  • ${percent}: Percent of total (for stacked area charts)

Grouping

You can aggregate all of the runs by turning on grouping, or group over an individual variable. You can also turn on grouping by grouping inside the table and the groups will automatically populate into the graph.

Smoothing

You can set the smoothing coefficient to be between 0 and 1 where 0 is no smoothing and 1 is maximum smoothing.

Ignore outliers

Ignore outliers sets the graph’s Y-axis range from the 5th to the 95th percentile of the data, rather than from 0% to 100%, so that outliers do not stretch the scale.

Expression

Expression lets you plot values derived from metrics like 1-accuracy. It currently only works if you are plotting a single metric. You can do simple arithmetic expressions, +, -, *, / and % as well as ** for powers.

Plot style

Select a style for your line plot.

Line plot:

Area plot:

Percentage area plot:

5.1.1.2 - Point aggregation

Use point aggregation methods within your line plots for improved data visualization accuracy and performance. There are two types of point aggregation modes: full fidelity and random sampling. W&B uses full fidelity mode by default.

Full fidelity

When you use full fidelity mode, W&B breaks the x-axis into dynamic buckets based on the number of data points. It then calculates the minimum, maximum, and average values within each bucket while rendering a point aggregation for the line plot.

There are three main advantages to using full fidelity mode for point aggregation:

  • Preserve extreme values and spikes: retain extreme values and spikes in your data
  • Configure how minimum and maximum points render: use the W&B App to interactively decide whether you want to show extreme (min/max) values as a shaded area.
  • Explore your data without losing data fidelity: W&B recalculates x-axis bucket sizes when you zoom into specific data points. This helps ensure that you can explore your data without losing accuracy. Caching is used to store previously computed aggregations to help reduce loading times which is particularly useful if you are navigating through large datasets.

Configure how minimum and maximum points render

Show or hide minimum and maximum values with shaded areas around your line plots.

The following image shows a blue line plot. The light blue shaded area represents the minimum and maximum values for each bucket.

There are three ways to render minimum and maximum values in your line plots:

  • Never: The min/max values are not displayed as a shaded area. Only show the aggregated line across the x-axis bucket.
  • On hover: The shaded area for min/max values appears dynamically when you hover over the chart. This option keeps the view uncluttered while allowing you to inspect ranges interactively.
  • Always: The min/max shaded area is consistently displayed for every bucket in the chart, helping you visualize the full range of values at all times. This can introduce visual noise if there are many runs visualized in the chart.

By default, the minimum and maximum values are not displayed as shaded areas. To view one of the shaded area options, follow these steps:

  1. Navigate to your W&B project
  2. Select the Workspace icon in the left sidebar.
  3. Select the gear icon in the top right corner of the screen, to the left of the Add panels button.
  4. From the UI slider that appears, select Line plots.
  5. Within the Point aggregation section, choose On hover or Always from the Show min/max values as a shaded area dropdown menu.
  1. Navigate to your W&B project
  2. Select the Workspace icon in the left sidebar.
  3. Select the line plot panel you want to enable full fidelity mode for
  4. Within the modal that appears, select On hover or Always from the Show min/max values as a shaded area dropdown menu.

Explore your data without losing data fidelity

Analyze specific regions of the dataset without missing critical points like extreme values or spikes. When you zoom in on a line plot, W&B adjusts the bucket sizes used to calculate the minimum, maximum, and average values within each bucket.

W&B dynamically divides the x-axis into 1,000 buckets by default. For each bucket, W&B calculates the following values:

  • Minimum: The lowest value in that bucket.
  • Maximum: The highest value in that bucket.
  • Average: The mean value of all points in that bucket.

W&B plots values in buckets in a way that preserves full data representation and includes extreme values in every plot. When zoomed in to 1,000 points or fewer, full fidelity mode renders every data point without additional aggregation.
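
The bucketing logic can be illustrated with a short sketch. This is a hypothetical helper for intuition only, not the W&B implementation; the bucket count and input arrays are assumptions:

import numpy as np

def aggregate_full_fidelity(xs, ys, n_buckets=1000):
    # Illustrative sketch: split the x-range into equal buckets and keep
    # the minimum, maximum, and average of the y values in each bucket.
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    edges = np.linspace(xs.min(), xs.max(), n_buckets + 1)
    idx = np.clip(np.digitize(xs, edges) - 1, 0, n_buckets - 1)
    buckets = []
    for b in range(n_buckets):
        vals = ys[idx == b]
        if vals.size:
            buckets.append({"x": edges[b], "min": vals.min(), "max": vals.max(), "avg": vals.mean()})
    return buckets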

To zoom in on a line plot, follow these steps:

  1. Navigate to your W&B project
  2. Select the Workspace icon on the left tab
  3. Optionally add a line plot panel to your workspace or navigate to an existing line plot panel.
  4. Click and drag to select a specific region to zoom in on.

Random sampling

Random sampling uses 1500 randomly sampled points to render line plots. Random sampling is useful for performance reasons when you have a large number of data points.

Enable random sampling

By default, W&B uses full fidelity mode. To enable random sampling for all line plots in a workspace:

  1. Navigate to your W&B project
  2. Select the Workspace icon on the left tab
  3. Select the gear icon in the top right corner of the screen, to the left of the Add panels button
  4. From the UI slider that appears, select Line plots
  5. Choose Random sampling from the Point aggregation section

To enable random sampling for an individual line plot:

  1. Navigate to your W&B project
  2. Select the Workspace icon on the left tab
  3. Select the line plot panel you want to enable random sampling for
  4. Within the modal that appears, select Random sampling from the Point aggregation method section

Access non sampled data

You can access the complete history of metrics logged during a run using the W&B Run API. The following example demonstrates how to retrieve and process the loss values from a specific run:

import wandb

# Initialize the W&B API
api = wandb.Api()
run = api.run("l2k2/examples-numpy-boston/i0wt6xua")

# Retrieve the history of the 'Loss' metric
history = run.scan_history(keys=["Loss"])

# Extract the loss values from the history
losses = [row["Loss"] for row in history]

5.1.1.3 - Smooth line plots

In line plots, use smoothing to see trends in noisy data.

W&B supports three types of smoothing: exponential moving average (the default), Gaussian smoothing, and running average.

See these live in an interactive W&B report.

Exponential Moving Average (Default)

Exponential smoothing is a technique for smoothing time series data by exponentially decaying the weight of previous points. The range is 0 to 1. See Exponential Smoothing for background. There is a de-bias term added so that early values in the time series are not biased towards zero.

The EMA algorithm takes the density of points on the line (the number of y values per unit of range on x-axis) into account. This allows consistent smoothing when displaying multiple lines with different characteristics simultaneously.

Here is sample code for how this works under the hood:

const smoothingWeight = Math.min(Math.sqrt(smoothingParam || 0), 0.999);
let lastY = yValues.length > 0 ? 0 : NaN;
let debiasWeight = 0;

return yValues.map((yPoint, index) => {
  const prevX = index > 0 ? index - 1 : 0;
  // VIEWPORT_SCALE scales the result to the chart's x-axis range
  const changeInX =
    ((xValues[index] - xValues[prevX]) / rangeOfX) * VIEWPORT_SCALE;
  const smoothingWeightAdj = Math.pow(smoothingWeight, changeInX);

  lastY = lastY * smoothingWeightAdj + yPoint;
  debiasWeight = debiasWeight * smoothingWeightAdj + 1;
  return lastY / debiasWeight;
});

Here’s what this looks like in the app:

Gaussian Smoothing

Gaussian smoothing (or gaussian kernel smoothing) computes a weighted average of the points, where the weights correspond to a gaussian distribution with the standard deviation specified as the smoothing parameter. The smoothed value is calculated for every input x value.

Gaussian smoothing is a good standard choice for smoothing if you are not concerned with matching TensorBoard’s behavior. Unlike an exponential moving average, each point is smoothed based on points occurring both before and after it.
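
As a rough sketch of the idea (not the exact code the app runs), each smoothed value is a Gaussian-weighted average of all points, with the standard deviation taken from the smoothing parameter:

import numpy as np

def gaussian_smooth(xs, ys, sigma):
    # Illustrative sketch: weight every point by a Gaussian centered on each x value.
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    smoothed = []
    for x0 in xs:
        weights = np.exp(-0.5 * ((xs - x0) / sigma) ** 2)
        smoothed.append(float(np.sum(weights * ys) / np.sum(weights)))
    return smoothed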

Here’s what this looks like in the app:

Running Average

Running average is a smoothing algorithm that replaces a point with the average of points in a window before and after the given x value. See “Boxcar Filter” at https://en.wikipedia.org/wiki/Moving_average. The selected parameter for running average tells Weights and Biases the number of points to consider in the moving average.

Consider using Gaussian Smoothing if your points are spaced unevenly on the x-axis.
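
A minimal sketch of the idea, assuming evenly spaced points and a window measured in number of points (this is for intuition, not the app's implementation):

def running_average(ys, window):
    # Illustrative sketch: replace each point with the mean of a centered window.
    half = window // 2
    smoothed = []
    for i in range(len(ys)):
        lo, hi = max(0, i - half), min(len(ys), i + half + 1)
        smoothed.append(sum(ys[lo:hi]) / (hi - lo))
    return smoothed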

The following image demonstrates what a running average looks like in the app:

Exponential Moving Average (Deprecated)

The TensorBoard EMA algorithm has been deprecated as it cannot accurately smooth multiple lines on the same chart that do not have a consistent point density (number of points plotted per unit of x-axis).

Exponential moving average is implemented to match TensorBoard’s smoothing algorithm. The range is 0 to 1. See Exponential Smoothing for background. There is a debias term added so that early values in the time series are not biased towards zero.

Here is sample code for how this works under the hood:

let last = 0;
let numAccum = 0;
let debiasWeight = 0;
const smoothedData = [];

data.forEach(d => {
  const nextVal = d;
  last = last * smoothingWeight + (1 - smoothingWeight) * nextVal;
  numAccum++;
  debiasWeight = 1.0 - Math.pow(smoothingWeight, numAccum);
  smoothedData.push(last / debiasWeight);
});

Here’s what this looks like in the app:

Implementation Details

All of the smoothing algorithms run on the sampled data, meaning that if you log more than 1500 points, the smoothing algorithm will run after the points are downloaded from the server. The intention of the smoothing algorithms is to help find patterns in data quickly. If you need exact smoothed values on metrics with a large number of logged points, it may be better to download your metrics through the API and run your own smoothing methods.
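
For example, here is a hedged sketch of pulling the complete history with the Public API and applying your own smoothing with pandas; the run path and metric name are placeholders:

import pandas as pd
import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")  # placeholder run path

# Pull every logged value of "loss", not the sampled view the UI uses
history = pd.DataFrame(run.scan_history(keys=["loss"]))

# Apply your own smoothing, for example a 100-point running average
history["loss_smoothed"] = history["loss"].rolling(window=100, min_periods=1).mean()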

Hide original data

By default we show the original, unsmoothed data as a faint line in the background. Click the Show Original toggle to turn this off.

5.1.2 - Bar plots

Visualize metrics, customize axes, and compare categorical data as bars.

A bar plot presents categorical data with rectangular bars which can be plotted vertically or horizontally. Bar plots show up by default with wandb.log() when all logged values are of length one.

Plotting Box and horizontal Bar plots in W&B

Customize with chart settings to limit max runs to show, group runs by any config and rename labels.

Customize bar plots

You can also create Box or Violin Plots to combine many summary statistics into one chart type.

  1. Group runs via runs table.
  2. Click ‘Add panel’ in the workspace.
  3. Add a standard ‘Bar Chart’ and select the metric to plot.
  4. Under the ‘Grouping’ tab, pick ‘box plot’ or ‘Violin’, etc. to plot either of these styles.
Customize Bar Plots

5.1.3 - Parallel coordinates

Compare results across machine learning experiments

Parallel coordinates charts summarize the relationship between large numbers of hyperparameters and model metrics at a glance.

  • Axes: Different hyperparameters from wandb.config and metrics from wandb.log.
  • Lines: Each line represents a single run. Mouse over a line to see a tooltip with details about the run. All lines that match the current filters will be shown, but if you turn off the eye, lines will be grayed out.

Panel Settings

Configure these features in the panel settings— click the edit button in the upper right corner of the panel.

  • Tooltip: On hover, a legend shows up with info on each run
  • Titles: Edit the axis titles to be more readable
  • Gradient: Customize the gradient to be any color range you like
  • Log scale: Each axis can be set to view on a log scale independently
  • Flip axis: Switch the axis direction— this is useful when you have both accuracy and loss as columns

See it live →

5.1.4 - Scatter plots

Use the scatter plot to compare multiple runs and visualize how your experiments are performing. We’ve added some customizable features:

  1. Plot a line along the min, max, and average
  2. Custom metadata tooltips
  3. Control point colors
  4. Set axes ranges
  5. Switch axes to log scale

Here’s an example of validation accuracy of different models over a couple of weeks of experimentation. The tooltip is customized to include the batch size and dropout as well as the values on the axes. There’s also a line plotting the running average of validation accuracy.
See a live example →

5.1.5 - Save and diff code

By default, W&B only saves the latest git commit hash. You can turn on more code features to compare the code between your experiments dynamically in the UI.

Starting with wandb version 0.8.28, W&B can save the code from your main training file where you call wandb.init().

Save library code

When you enable code saving, W&B saves the code from the file that called wandb.init(). To save additional library code, you have the following options:

Call wandb.run.log_code(".") after calling wandb.init()

import wandb

wandb.init()
wandb.run.log_code(".")

Pass a settings object to wandb.init with code_dir set

import wandb

wandb.init(settings=wandb.Settings(code_dir="."))

This captures all python source code files in the current directory and all subdirectories as an artifact. For more control over the types and locations of source code files that are saved, see the reference docs.

Set code saving in the UI

In addition to setting code saving programmatically, you can also toggle this feature in your W&B account Settings. Note that this will enable code saving for all teams associated with your account.

By default, W&B disables code saving for all teams.

  1. Log in to your W&B account.
  2. Go to Settings > Privacy.
  3. Under Project and content security, toggle Disable default code saving on.

Code comparer

Compare code used in different W&B runs:

  1. Select the Add panels button in the top right corner of the page.
  2. Expand TEXT AND CODE dropdown and select Code.

Jupyter session history

W&B saves the history of code executed in your Jupyter notebook session. When you call wandb.init() inside of Jupyter, W&B adds a hook to automatically save a Jupyter notebook containing the history of code executed in your current session.

  1. Navigate to your project workspace that contains your code.
  2. Select the Artifacts tab in the left navigation bar.
  3. Expand the code artifact.
  4. Select the Files tab.

This displays the cells that were run in your session along with any outputs created by calling iPython’s display method. This enables you to see exactly what code was run within Jupyter in a given run. When possible, W&B also saves the most recent version of the notebook, which you can find in the code directory as well.

5.1.6 - Parameter importance

Visualize the relationships between your model’s hyperparameters and output metrics

Discover which of your hyperparameters were the best predictors of, and were most highly correlated with, desirable values of your metrics.

Correlation is the linear correlation between the hyperparameter and the chosen metric (in this case val_loss). So a high correlation means that when the hyperparameter has a higher value, the metric also has higher values and vice versa. Correlation is a great metric to look at but it can’t capture second order interactions between inputs and it can get messy to compare inputs with wildly different ranges.

Therefore W&B also calculates an importance metric. W&B trains a random forest with the hyperparameters as inputs and the metric as the target output, and reports the feature importance values from the random forest.
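
The idea can be sketched with the Public API and scikit-learn. This is a simplified illustration, not the exact computation W&B performs; the entity, project, and metric name are placeholders:

import pandas as pd
import wandb
from sklearn.ensemble import RandomForestRegressor

api = wandb.Api()
runs = api.runs("<entity>/<project>")  # placeholder project path

# Collect each run's hyperparameters and final metric value
rows = [
    {**run.config, "val_loss": run.summary.get("val_loss")}
    for run in runs
    if run.summary.get("val_loss") is not None
]
df = pd.DataFrame(rows).select_dtypes("number").dropna()

# Fit a random forest that predicts the metric from the hyperparameters
X, y = df.drop(columns=["val_loss"]), df["val_loss"]
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Feature importances approximate how predictive each hyperparameter is
print(pd.Series(forest.feature_importances_, index=X.columns).sort_values(ascending=False))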

The idea for this technique was inspired by a conversation with Jeremy Howard who has pioneered the use of random forest feature importances to explore hyperparameter spaces at Fast.ai. W&B highly recommends you check out this lecture (and these notes) to learn more about the motivation behind this analysis.

The hyperparameter importance panel untangles the complicated interactions between highly correlated hyperparameters. In doing so, it helps you fine-tune your hyperparameter searches by showing you which of your hyperparameters matter the most in terms of predicting model performance.

Creating a hyperparameter importance panel

  1. Navigate to your W&B project.
  2. Select the Add panels button.
  3. Expand the CHARTS dropdown and choose Parallel coordinates.
Using automatic parameter visualization

With the parameter manager, we can manually set the visible and hidden parameters.

Manually setting the visible and hidden fields

Interpreting a hyperparameter importance panel

This panel shows you all the parameters passed to the wandb.config object in your training script. Next, it shows the feature importances and correlations of these config parameters with respect to the model metric you select (val_loss in this case).

Importance

The importance column shows you the degree to which each hyperparameter was useful in predicting the chosen metric. Imagine a scenario where you start tuning a plethora of hyperparameters and use this plot to hone in on which ones merit further exploration. The subsequent sweeps can then be limited to the most important hyperparameters, thereby finding a better model faster and cheaper.

In the preceding image, you can see that epochs, learning_rate, batch_size and weight_decay were fairly important.

Correlations

Correlations capture linear relationships between individual hyperparameters and metric values. They answer the question of whether there is a significant relationship between using a hyperparameter, such as the SGD optimizer, and the val_loss (the answer in this case is yes). Correlation values range from -1 to 1, where positive values represent positive linear correlation, negative values represent negative linear correlation, and a value of 0 represents no correlation. Generally a value greater than 0.7 in either direction represents strong correlation.

You might use this graph to further explore the values that have a higher correlation to our metric (in this case you might pick stochastic gradient descent or adam over rmsprop or nadam) or train for more epochs.

The disparities between importance and correlations result from the fact that importance accounts for interactions between hyperparameters, whereas correlation only measures the effects of individual hyperparameters on metric values. Secondly, correlations capture only the linear relationships, whereas importances can capture more complex ones.

As you can see both importance and correlations are powerful tools for understanding how your hyperparameters influence model performance.

5.1.7 - Compare run metrics

Compare metrics across multiple runs

Use the Run Comparer to see what metrics are different across your runs.

  1. Select the Add panels button in the top right corner of the page.
  2. From the left panel that appears, expand the Evaluation dropdown.
  3. Select Run comparer

Toggle the diff only option to hide rows where the values are the same across runs.

5.1.8 - Query panels

Some features on this page are in beta, hidden behind a feature flag. Add weave-plot to your bio on your profile page to unlock all related features.

Use query panels to query and interactively visualize your data.

Create a query panel

Add a query to your workspace or within a report.

  1. Navigate to your project’s workspace.
  2. In the upper right hand corner, click Add panel.
  3. From the dropdown, select Query panel.

Within a report, type and select /Query panel.

Alternatively, you can associate a query with a set of runs:

  1. Within your report, type and select /Panel grid.
  2. Click the Add panel button.
  3. From the dropdown, select Query panel.

Query components

Expressions

Use query expressions to query your data stored in W&B such as runs, artifacts, models, tables, and more.

Example: Query a table

Suppose you want to query a W&B Table. In your training code you log a table called "cifar10_sample_table":

import wandb
wandb.log({"cifar10_sample_table":<MY_TABLE>})

Within the query panel you can query your table with:

runs.summary["cifar10_sample_table"]

Breaking this down:

  • runs is a variable automatically injected in Query Panel Expressions when the Query Panel is in a Workspace. Its “value” is the list of runs which are visible for that particular Workspace. Read about the different attributes available within a run here.
  • summary is an op which returns the Summary object for a Run. Ops are mapped, meaning this op is applied to each Run in the list, resulting in a list of Summary objects.
  • ["cifar10_sample_table"] is a Pick op (denoted with brackets), with a parameter of predictions. Since Summary objects act like dictionaries or maps, this operation picks the predictions field off of each Summary object.

To learn how to write your own queries interactively, see this report.

Configurations

Select the gear icon on the upper left corner of the panel to expand the query configuration. This allows the user to configure the type of panel and the parameters for the result panel.

Result panels

Finally, the query result panel renders the result of the query expression, using the selected query panel, configured by the configuration to display the data in an interactive form. The following images show a Table and a Plot of the same data.

Basic operations

The following are common operations you can perform within your query panels.

Sort

Sort from the column options:

Filter

You can either filter directly in the query or use the filter button in the top left corner (second image).

Map

Map operations iterate over lists and apply a function to each element in the data. You can do this directly with a panel query or by inserting a new column from the column options.

Groupby

You can groupby using a query or from the column options.

Concat

The concat operation allows you to concatenate two tables. You can concatenate or join from the panel settings.

Join

It is also possible to join tables directly in the query. Consider the following query expression:

project("luis_team_test", "weave_example_queries").runs.summary["short_table_0"].table.rows.concat.join(\
project("luis_team_test", "weave_example_queries").runs.summary["short_table_1"].table.rows.concat,\
(row) => row["Label"],(row) => row["Label"], "Table1", "Table2",\
"false", "false")

The table on the left is generated from:

project("luis_team_test", "weave_example_queries").\
runs.summary["short_table_0"].table.rows.concat.join

The table in the right is generated from:

project("luis_team_test", "weave_example_queries").\
runs.summary["short_table_1"].table.rows.concat

Where:

  • (row) => row["Label"] are selectors for each table, determining which column to join on
  • "Table1" and "Table2" are the names of each table when joined
  • "false" and "false" are the left and right outer join settings, respectively

Runs object

Use query panels to access the runs object. Run objects store records of your experiments. You can find more details about it in this section of the report but, as a quick overview, the runs object makes the following attributes available (a short Public API sketch follows the list below):

  • summary: A dictionary of information that summarizes the run’s results. This can be scalars like accuracy and loss, or large files. By default, wandb.log() sets the summary to the final value of a logged time series. You can set the contents of the summary directly. Think of the summary as the run’s outputs.
  • history: A list of dictionaries meant to store values that change while the model is training such as loss. The command wandb.log() appends to this object.
  • config: A dictionary of the run’s configuration information, such as the hyperparameters for a training run or the preprocessing methods for a run that creates a dataset Artifact. Think of these as the run’s “inputs”
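
Outside of query panels, the same attributes can be read with the W&B Public API. A minimal sketch, assuming a placeholder run path:

import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")  # placeholder run path

print(run.config)              # the run's inputs, such as hyperparameters
print(run.summary)             # the run's outputs, such as final metric values
for row in run.scan_history(): # dictionaries appended over time by wandb.log()
    print(row)
    break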

Access Artifacts

Artifacts are a core concept in W&B. They are a versioned, named collection of files and directories. Use Artifacts to track model weights, datasets, and any other file or directory. Artifacts are stored in W&B and can be downloaded or used in other runs. You can find more details and examples in this section of the report. Artifacts are normally accessed from the project object:

  • project.artifactVersion(): returns the specific artifact version for a given name and version within a project
  • project.artifact(""): returns the artifact for a given name within a project. You can then use .versions to get a list of all versions of this artifact
  • project.artifactType(): returns the artifactType for a given name within a project. You can then use .artifacts to get a list of all artifacts with this type
  • project.artifactTypes: returns a list of all artifact types under the project

5.1.8.1 - Embed objects

W&B’s Embedding Projector allows users to plot multi-dimensional embeddings on a 2D plane using common dimension reduction algorithms like PCA, UMAP, and t-SNE.

Embeddings are used to represent objects (people, images, posts, words, etc…) with a list of numbers - sometimes referred to as a vector. In machine learning and data science use cases, embeddings can be generated using a variety of approaches across a range of applications. This page assumes the reader is familiar with embeddings and is interested in visually analyzing them inside of W&B.

Embedding Examples

Hello World

W&B allows you to log embeddings using the wandb.Table class. Consider the following example of 3 embeddings, each consisting of 5 dimensions:

import wandb

wandb.init(project="embedding_tutorial")
embeddings = [
    # D1   D2   D3   D4   D5
    [0.2, 0.4, 0.1, 0.7, 0.5],  # embedding 1
    [0.3, 0.1, 0.9, 0.2, 0.7],  # embedding 2
    [0.4, 0.5, 0.2, 0.2, 0.1],  # embedding 3
]
wandb.log(
    {"embeddings": wandb.Table(columns=["D1", "D2", "D3", "D4", "D5"], data=embeddings)}
)
wandb.finish()

After running the above code, the W&B dashboard will have a new Table containing your data. You can select 2D Projection from the upper right panel selector to plot the embeddings in 2 dimensions. Smart defaults are automatically selected, and you can easily override them in the configuration menu accessed by clicking the gear icon. In this example, we automatically use all 5 available numeric dimensions.

Digits MNIST

While the above example shows the basic mechanics of logging embeddings, typically you are working with many more dimensions and samples. Let’s consider the MNIST Digits dataset (UCI ML hand-written digits datasets) made available via SciKit-Learn. This dataset has 1797 records, each with 64 dimensions. The problem is a 10 class classification use case. We can convert the input data to an image for visualization as well.

import wandb
from sklearn.datasets import load_digits

wandb.init(project="embedding_tutorial")

# Load the dataset
ds = load_digits(as_frame=True)
df = ds.data

# Create a "target" column
df["target"] = ds.target.astype(str)
cols = df.columns.tolist()
df = df[cols[-1:] + cols[:-1]]

# Create an "image" column
df["image"] = df.apply(
    lambda row: wandb.Image(row[1:].values.reshape(8, 8) / 16.0), axis=1
)
cols = df.columns.tolist()
df = df[cols[-1:] + cols[:-1]]

wandb.log({"digits": df})
wandb.finish()

After running the above code, again we are presented with a Table in the UI. By selecting 2D Projection we can configure the definition of the embedding, coloring, algorithm (PCA, UMAP, t-SNE), algorithm parameters, and even overlay (in this case we show the image when hovering over a point). In this particular case, these are all “smart defaults” and you should see something very similar with a single click on 2D Projection. (Click here to interact with this example).

Logging Options

You can log embeddings in a number of different formats:

  1. Single Embedding Column: Often your data is already in a “matrix”-like format. In this case, you can create a single embedding column - where the data type of the cell values can be list[int], list[float], or np.ndarray.
  2. Multiple Numeric Columns: In the above two examples, we use this approach and create a column for each dimension. We currently accept python int or float for the cells.


Furthermore, just like all tables, you have many options regarding how to construct the table (a short sketch follows this list):

  1. Directly from a dataframe using wandb.Table(dataframe=df)
  2. Directly from a list of data using wandb.Table(data=[...], columns=[...])
  3. Build the table incrementally row-by-row (great if you have a loop in your code). Add rows to your table using table.add_data(...)
  4. Add an embedding column to your table (great if you have a list of predictions in the form of embeddings): table.add_col("col_name", ...)
  5. Add a computed column (great if you have a function or model you want to map over your table): table.add_computed_columns(lambda row, ndx: {"embedding": model.predict(row)})
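
For instance, here is a minimal sketch of the single-embedding-column format, built row by row; the project name, column names, and vectors are placeholders:

import wandb

with wandb.init(project="embedding_tutorial") as run:
    table = wandb.Table(columns=["id", "embedding"])
    table.add_data("item_0", [0.2, 0.4, 0.1, 0.7, 0.5])  # each cell holds a full vector
    table.add_data("item_1", [0.3, 0.1, 0.9, 0.2, 0.7])
    run.log({"embeddings_single_column": table})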

Plotting Options

After selecting 2D Projection, you can click the gear icon to edit the rendering settings. In addition to selecting the intended columns (see above), you can select an algorithm of interest (along with the desired parameters). Below you can see the parameters for UMAP and t-SNE respectively.

5.2 - Custom charts

Use Custom Charts to create charts that aren’t possible right now in the default UI. Log arbitrary tables of data and visualize them exactly how you want. Control details of fonts, colors, and tooltips with the power of Vega.

Supported charts from vega.github.io/vega

How it works

  1. Log data: From your script, log config and summary data as you normally would when running with W&B. To visualize a list of multiple values logged at one specific time, use a custom wandb.Table
  2. Customize the chart: Pull in any of this logged data with a GraphQL query. Visualize the results of your query with Vega, a powerful visualization grammar.
  3. Log the chart: Call your own preset from your script with wandb.plot_table().

Log charts from a script

Builtin presets

These presets have builtin wandb.plot methods that make it fast to log charts directly from your script and see the exact visualizations you’re looking for in the UI.

wandb.plot.line()

Log a custom line plot—a list of connected and ordered points (x,y) on arbitrary axes x and y.

data = [[x, y] for (x, y) in zip(x_values, y_values)]
table = wandb.Table(data=data, columns=["x", "y"])
wandb.log(
    {
        "my_custom_plot_id": wandb.plot.line(
            table, "x", "y", title="Custom Y vs X Line Plot"
        )
    }
)

You can use this to log curves on any two dimensions. Note that if you’re plotting two lists of values against each other, the number of values in the lists must match exactly (for example, each point must have an x and a y).

See in the app

Run the code

wandb.plot.scatter()

Log a custom scatter plot—a list of points (x, y) on a pair of arbitrary axes x and y.

data = [[x, y] for (x, y) in zip(class_x_prediction_scores, class_y_prediction_scores)]
table = wandb.Table(data=data, columns=["class_x", "class_y"])
wandb.log({"my_custom_id": wandb.plot.scatter(table, "class_x", "class_y")})

You can use this to log scatter points on any two dimensions. Note that if you’re plotting two lists of values against each other, the number of values in the lists must match exactly (for example, each point must have an x and a y).

See in the app

Run the code

wandb.plot.bar()

Log a custom bar chart—a list of labeled values as bars—natively in a few lines:

data = [[label, val] for (label, val) in zip(labels, values)]
table = wandb.Table(data=data, columns=["label", "value"])
wandb.log(
    {
        "my_bar_chart_id": wandb.plot.bar(
            table, "label", "value", title="Custom Bar Chart"
        )
    }
)

You can use this to log arbitrary bar charts. Note that the number of labels and values in the lists must match exactly (for example, each data point must have both).

See in the app

Run the code

wandb.plot.histogram()

Log a custom histogram—sort a list of values into bins by count/frequency of occurrence—natively in a few lines. Let’s say I have a list of prediction confidence scores (scores) and want to visualize their distribution:

data = [[s] for s in scores]
table = wandb.Table(data=data, columns=["scores"])
wandb.log({"my_histogram": wandb.plot.histogram(table, "scores", title=None)})

You can use this to log arbitrary histograms. Note that data is a list of lists, intended to support a 2D array of rows and columns.

See in the app

Run the code

wandb.plot.pr_curve()

Create a Precision-Recall curve in one line:

plot = wandb.plot.pr_curve(ground_truth, predictions, labels=None, classes_to_plot=None)

wandb.log({"pr": plot})

You can log this whenever your code has access to:

  • a model’s predicted scores (predictions) on a set of examples
  • the corresponding ground truth labels (ground_truth) for those examples
  • (optionally) a list of the labels/class names (labels=["cat", "dog", "bird"...] if label index 0 means cat, 1 = dog, 2 = bird, etc.)
  • (optionally) a subset (still in list format) of the labels to visualize in the plot

See in the app

Run the code

wandb.plot.roc_curve()

Create an ROC curve in one line:

plot = wandb.plot.roc_curve(
    ground_truth, predictions, labels=None, classes_to_plot=None
)

wandb.log({"roc": plot})

You can log this whenever your code has access to:

  • a model’s predicted scores (predictions) on a set of examples
  • the corresponding ground truth labels (ground_truth) for those examples
  • (optionally) a list of the labels/ class names (labels=["cat", "dog", "bird"...] if label index 0 means cat, 1 = dog, 2 = bird, etc.)
  • (optionally) a subset (still in list format) of these labels to visualize on the plot

See in the app

Run the code

Custom presets

Tweak a builtin preset, or create a new preset, then save the chart. Use the chart ID to log data to that custom preset directly from your script.

# Create a table with the columns to plot
table = wandb.Table(data=data, columns=["step", "height"])

# Map from the table's columns to the chart's fields
fields = {"x": "step", "value": "height"}

# Use the table to populate the new custom chart preset
# To use your own saved chart preset, change the vega_spec_name
my_custom_chart = wandb.plot_table(
    vega_spec_name="carey/new_chart",
    data_table=table,
    fields=fields,
)

Run the code

Log data

Here are the data types you can log from your script and use in a custom chart:

  • Config: Initial settings of your experiment (your independent variables). This includes any named fields you’ve logged as keys to wandb.config at the start of your training. For example: wandb.config.learning_rate = 0.0001
  • Summary: Single values logged during training (your results or dependent variables). For example, wandb.log({"val_acc" : 0.8}). If you write to this key multiple times during training via wandb.log(), the summary is set to the final value of that key.
  • History: The full time series of the logged scalar is available to the query via the history field
  • summaryTable: If you need to log a list of multiple values, use a wandb.Table() to save that data, then query it in your custom panel.
  • historyTable: If you need to see the history data, then query historyTable in your custom chart panel. Each time you call wandb.Table() or log a custom chart, you’re creating a new table in history for that step.

How to log a custom table

Use wandb.Table() to log your data as a 2D array. Typically each row of this table represents one data point, and each column denotes the relevant fields/dimensions for each data point which you’d like to plot. As you configure a custom panel, the whole table will be accessible via the named key passed to wandb.log() (custom_data_table below), and the individual fields will be accessible via the column names (x, y, and z). You can log tables at multiple time steps throughout your experiment. The maximum size of each table is 10,000 rows.

Try it in a Google Colab

# Logging a custom table of data
my_custom_data = [[x1, y1, z1], [x2, y2, z2]]
wandb.log(
    {"custom_data_table": wandb.Table(data=my_custom_data, columns=["x", "y", "z"])}
)

Customize the chart

Add a new custom chart to get started, then edit the query to select data from your visible runs. The query uses GraphQL to fetch data from the config, summary, and history fields in your runs.

Add a new custom chart, then edit the query

Custom visualizations

Select a Chart in the upper right corner to start with a default preset. Next, pick Chart fields to map the data you’re pulling in from the query to the corresponding fields in your chart. Here’s an example of selecting a metric to get from the query, then mapping that into the bar chart fields below.

Creating a custom bar chart showing accuracy across runs in a project

How to edit Vega

Click Edit at the top of the panel to go into Vega edit mode. Here you can define a Vega specification that creates an interactive chart in the UI. You can change any aspect of the chart. For example, you can change the title, pick a different color scheme, show curves as a series of points instead of as connected lines. You can also make changes to the data itself, such as using a Vega transform to bin an array of values into a histogram. The panel preview will update interactively, so you can see the effect of your changes as you edit the Vega spec or query. Refer to the Vega documentation and tutorials.

Field references

To pull data into your chart from W&B, add template strings of the form "${field:<field-name>}" anywhere in your Vega spec. This will create a dropdown in the Chart Fields area on the right side, which users can use to select a query result column to map into Vega.

To set a default value for a field, use this syntax: "${field:<field-name>:<placeholder text>}"

Saving chart presets

Apply any changes to a specific visualization panel with the button at the bottom of the modal. Alternatively, you can save the Vega spec to use elsewhere in your project. To save the reusable chart definition, click Save as at the top of the Vega editor and give your preset a name.

Articles and guides

  1. The W&B Machine Learning Visualization IDE
  2. Visualizing NLP Attention Based Models
  3. Visualizing The Effect of Attention on Gradient Flow
  4. Logging arbitrary curves

Frequently asked questions

Coming soon

  • Polling: Auto-refresh of data in the chart
  • Sampling: Dynamically adjust the total number of points loaded into the panel for efficiency

Gotchas

  • Not seeing the data you’re expecting in the query as you’re editing your chart? It might be because the column you’re looking for is not logged in the runs you have selected. Save your chart and go back out to the runs table, and select the runs you’d like to visualize with the eye icon.

Common use cases

  • Customize bar plots with error bars
  • Show model validation metrics which require custom x-y coordinates (like precision-recall curves)
  • Overlay data distributions from two different models/experiments as histograms
  • Show changes in a metric via snapshots at multiple points during training
  • Create a unique visualization not yet available in W&B (and hopefully share it with the world)

5.2.1 - Tutorial: Use custom charts

Tutorial of using the custom charts feature in the W&B UI

Use custom charts to control the data you’re loading in to a panel and its visualization.

1. Log data to W&B

First, log data in your script. Use wandb.config for single points set at the beginning of training, like hyperparameters. Use wandb.log() for multiple points over time, and log custom 2D arrays with wandb.Table(). We recommend logging up to 10,000 data points per logged key.

# Logging a custom table of data
my_custom_data = [[x1, y1, z1], [x2, y2, z2]]
wandb.log(
  {"custom_data_table": wandb.Table(data=my_custom_data, columns=["x", "y", "z"])}
)

Try a quick example notebook to log the data tables, and in the next step we’ll set up custom charts. See what the resulting charts look like in the live report.

2. Create a query

Once you’ve logged data to visualize, go to your project page and click the + button to add a new panel, then select Custom Chart. You can follow along in this workspace.

A new, blank custom chart ready to be configured

Add a query

  1. Click summary and select historyTable to set up a new query pulling data from the run history.
  2. Type in the key where you logged the wandb.Table(). In the code snippet above, it was custom_data_table. In the example notebook, the keys are pr_curve and roc_curve.

Set Vega fields

Now that the query is loading in these columns, they’re available as options to select in the Vega fields dropdown menus:

Pulling in columns from the query results to set Vega fields
  • x-axis: runSets_historyTable_r (recall)
  • y-axis: runSets_historyTable_p (precision)
  • color: runSets_historyTable_c (class label)

3. Customize the chart

Now that looks pretty good, but I’d like to switch from a scatter plot to a line plot. Click Edit to change the Vega spec for this built in chart. Follow along in this workspace.

I updated the Vega spec to customize the visualization:

  • add titles for the plot, legend, x-axis, and y-axis (set “title” for each field)
  • change the value of “mark” from “point” to “line”
  • remove the unused “size” field

To save this as a preset that you can use elsewhere in this project, click Save as at the top of the page. Here’s what the result looks like, along with an ROC curve:

Bonus: Composite Histograms

Histograms can visualize numerical distributions to help us understand larger datasets. Composite histograms show multiple distributions across the same bins, letting us compare two or more metrics across different models or across different classes within our model. For a semantic segmentation model detecting objects in driving scenes, we might compare the effectiveness of optimizing for accuracy versus intersection over union (IOU), or we might want to know how well different models detect cars (large, common regions in the data) versus traffic signs (much smaller, less common regions). In the demo Colab, you can compare the confidence scores for two of the ten classes of living things.

To create your own version of the custom composite histogram panel:

  1. Create a new Custom Chart panel in your Workspace or Report (by adding a “Custom Chart” visualization). Hit the “Edit” button in the top right to modify the Vega spec starting from any built-in panel type.
  2. Replace that built-in Vega spec with my MVP code for a composite histogram in Vega. You can modify the main title, axis titles, input domain, and any other details directly in this Vega spec using Vega syntax (you could change the colors or even add a third histogram :)
  3. Modify the query in the right hand side to load the correct data from your wandb logs. Add the field summaryTable and set the corresponding tableKey to class_scores to fetch the wandb.Table logged by your run. This will let you populate the two histogram bin sets (red_bins and blue_bins) via the dropdown menus with the columns of the wandb.Table logged as class_scores. For my example, I chose the animal class prediction scores for the red bins and plant for the blue bins.
  4. You can keep making changes to the Vega spec and query until you’re happy with the plot you see in the preview rendering. Once you’re done, click Save as in the top and give your custom plot a name so you can reuse it. Then click Apply from panel library to finish your plot.

Here’s what my results look like from a very brief experiment: training on only 1000 examples for one epoch yields a model that’s very confident that most images are not plants and very uncertain about which images might be animals.

5.3 - Manage workspace, section, and panel settings

Within a given workspace page there are three different setting levels: workspaces, sections, and panels. Workspace settings apply to the entire workspace. Section settings apply to all panels within a section. Panel settings apply to individual panels.

Workspace settings

Workspace settings apply to all sections and all panels within those sections. You can edit two types of workspace settings: Workspace layout and Line plots. Workspace layouts determine the structure of the workspace, while Line plots settings control the default settings for line plots in the workspace.

To edit settings that apply to the overall structure of this workspace:

  1. Navigate to your project workspace.
  2. Click the gear icon next to the New report button to view the workspace settings.
  3. Choose Workspace layout to change the workspace’s layout, or choose Line plots to configure default settings for line plots in the workspace.

Workspace layout options

Configure a workspace’s layout to define the overall structure of the workspace. This includes sectioning logic and panel organization.

The workspace layout options page shows whether the workspace generates panels automatically or manually. To adjust a workspace’s panel generation mode, refer to Panels.

This table describes each workspace layout option.

| Workspace setting | Description |
| --- | --- |
| Hide empty sections during search | Hide sections that do not contain any panels when searching for a panel. |
| Sort panels alphabetically | Sort panels in your workspaces alphabetically. |
| Section organization | Remove all existing sections and panels and repopulate them with new section names. Groups the newly populated sections either by first or last prefix. |

Line plots options

Set global defaults and custom rules for line plots in a workspace by modifying the Line plots workspace settings.

You can edit two main settings within Line plots settings: Data and Display preferences. The Data tab contains the following settings:

| Line plot setting | Description |
| --- | --- |
| X axis | The scale of the x-axis in line plots. The x-axis is set to Step by default. See the proceeding table for the list of x-axis options. |
| Range | Minimum and maximum settings to display for the x-axis. |
| Smoothing | Change the smoothing on the line plot. For more information about smoothing, see Smooth line plots. |
| Outliers | Rescale to exclude outliers from the default plot min and max scale. |
| Point aggregation method | Improve data visualization accuracy and performance. See Point aggregation for more information. |
| Max number of runs or groups | Limit the number of runs or groups displayed on the line plot. |

In addition to Step, there are other options for the x-axis:

| X axis option | Description |
| --- | --- |
| Relative Time (Wall) | Timestamp since the process starts. For example, suppose you start a run and resume that run the next day. If you then log something, the recorded point is 24 hours. |
| Relative Time (Process) | Timestamp inside the running process. For example, suppose you start a run and let it continue for 10 seconds. The next day you resume that run. The point is recorded as 10 seconds. |
| Wall Time | Minutes elapsed since the start of the first run on the graph. |
| Step | Increments each time you call wandb.log(). |

Within the Display preferences tab, you can toggle the proceeding settings:

| Display preference | Description |
| --- | --- |
| Remove legends from all panels | Remove the panel’s legend |
| Display colored run names in tooltips | Show the runs as colored text within the tooltip |
| Only show highlighted run in companion chart tooltip | Display only highlighted runs in the chart tooltip |
| Number of runs shown in tooltips | Display the number of runs in the tooltip |
| Display full run names on the primary chart tooltip | Display the full name of the run in the chart tooltip |

Section settings

Section settings apply to all panels within that section. Within a workspace section you can sort panels, rearrange panels, and rename the section name.

Modify section settings by selecting the three horizontal dots in the upper right corner of a section.

From the dropdown, you can edit the following settings that apply to the entire section:

| Section setting | Description |
| --- | --- |
| Rename a section | Rename the name of the section |
| Sort panels A-Z | Sort panels within a section alphabetically |
| Rearrange panels | Select and drag a panel within a section to manually order your panels |

The proceeding animation demonstrates how to rearrange panels within a section:

Panel settings

Customize an individual panel’s settings to compare multiple lines on the same plot, calculate custom axes, rename labels, and more. To edit a panel’s settings:

  1. Hover your mouse over the panel you want to edit.
  2. Select the pencil icon that appears.
  3. Within the modal that appears, you can edit settings related to the panel’s data, display preferences, and more.

For a complete list of settings you can apply to a panel, see Edit line panel settings.

5.4 - Settings

Use the Weights and Biases Settings Page to customize your individual user profile or team settings.

Within your individual user account you can edit: your profile picture, display name, geographic location, biography information, emails associated with your account, and manage alerts for runs. You can also use the settings page to link your GitHub repository and delete your account. For more information, see User settings.

Use the team settings page to invite new members to or remove members from a team, manage alerts for team runs, change privacy settings, and view and manage storage usage. For more information about team settings, see Team settings.

5.4.1 - Manage user settings

Manage your profile information, account defaults, alerts, participation in beta products, GitHub integration, storage usage, account activation, and team creation in your user settings.

Navigate to your user profile page and select your user icon on the top right corner. From the dropdown, choose Settings.

Profile

Within the Profile section you can manage and modify your account name and institution. You can optionally add a biography, location, link to a personal or your institution’s website, and upload a profile image.

Teams

Create a new team in the Team section. To create a new team, select the New team button and provide the following:

  • Team name - the name of your team. The team name must be unique. Team names cannot be changed.
  • Team type - Select either the Work or Academic button.
  • Company/Organization - Provide the name of the team’s company or organization. Choose the dropdown menu to select a company or organization. You can optionally provide a new organization.

Beta features

Within the Beta Features section you can optionally enable fun add-ons and sneak previews of new products in development. Select the toggle switch next to the beta feature you want to enable.

Alerts

Get notified when your runs crash or finish, or set custom alerts with wandb.alert() (a short example follows below). Receive notifications either through Email or Slack. Toggle the switch next to the event type you want to receive alerts from.

  • Runs finished: whether a Weights and Biases run successfully finished.
  • Run crashed: notification if a run has failed to finish.

For more information about how to set up and manage alerts, see Send alerts with wandb.alert.
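
For example, a minimal sketch of a custom alert raised from a training script; the metric value and threshold are placeholders:

import wandb

run = wandb.init(project="my-project-name")

accuracy = 0.72  # placeholder value computed during training
if accuracy < 0.8:
    wandb.alert(
        title="Low accuracy",
        text=f"Accuracy {accuracy} is below the acceptable threshold of 0.8",
    )
run.finish()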

Personal GitHub integration

Connect a personal GitHub account. To connect a GitHub account:

  1. Select the Connect Github button. This will redirect you to an open authorization (OAuth) page.
  2. Select the organization to grant access in the Organization access section.
  3. Select Authorize wandb.

Delete your account

Select the Delete Account button to delete your account.

Storage

The Storage section describes the total storage your account has consumed on the Weights and Biases servers. The default storage plan is 100GB. For more information about storage and pricing, see the Pricing page.

5.4.2 - Manage team settings

Manage a team’s members, avatar, alerts, and privacy settings with the Team Settings page.

Team settings

Change your team’s settings, including members, avatar, alerts, privacy, and usage. Only team administrators can view and edit a team’s settings.

Members

The Members section shows a list of all pending invitations and the members that have accepted the invitation to join the team. Each member listed displays a member’s name, username, email, team role, as well as their access privileges to Models and Weave, which are inherited from the Organization. There are three standard team roles: Administrator (Admin), Member, and View-only.

See Add and Manage teams for information on how to create a team, invite users to a team, remove users from a team, and change a user’s role.

Avatar

Set an avatar by navigating to the Avatar section and uploading an image.

  1. Select Update Avatar to open a file dialog.
  2. From the file dialog, choose the image you want to use.

Alerts

Notify your team when runs crash or finish, or set custom alerts. Your team can receive alerts either through email or Slack.

Toggle the switch next to the event type you want to receive alerts from. Weights and Biases provides the following event type options by default:

  • Runs finished: whether a Weights and Biases run successfully finished.
  • Run crashed: if a run has failed to finish.

For more information about how to set up and manage alerts, see Send alerts with wandb.alert.

Privacy

Navigate to the Privacy section to change privacy settings. Only members with Administrative roles can modify privacy settings. Administrator roles can:

  • Force projects in the team to be private.
  • Enable code saving by default.

Usage

The Usage section describes the total storage the team has consumed on the Weights and Biases servers. The default storage plan is 100GB. For more information about storage and pricing, see the Pricing page.

Storage

The Storage section describes the cloud storage bucket configuration that is being used for the team’s data. For more information, see Secure Storage Connector or check out our W&B Server docs if you are self-hosting.

5.4.3 - Manage email settings

Manage emails from the Settings page.

Add, delete, manage email types and primary email addresses in your W&B Profile Settings page. Select your profile icon in the upper right corner of the W&B dashboard. From the dropdown, select Settings. Within the Settings page, scroll down to the Emails dashboard:

Manage primary email

The primary email is marked with a 😎 emoji. The primary email is automatically defined with the email you provided when you created a W&B account.

Select the kebab dropdown to change the primary email associated with your Weights and Biases account:

Add emails

Select + Add Email to add an email. This will take you to an Auth0 page. You can enter the credentials for the new email or connect using single sign-on (SSO).

Delete emails

Select the kebab dropdown and choose Delete Emails to delete an email that is registered to your W&B account.

Log in methods

The Log in Methods column displays the log in methods that are associated with your account.

A verification email is sent to your email account when you create a W&B account. Your email account is considered unverified until you verify your email address. Unverified emails are displayed in red.

If you no longer have the original verification email that was sent to your email account, log in with your email address again to receive a new verification email.

Contact support@wandb.com for account log in issues.

5.4.4 - Manage teams

Collaborate with your colleagues, share results, and track all the experiments across your team

Use W&B Teams as a central workspace for your ML team to build better models faster.

  • Track all the experiments your team has tried so you never duplicate work.
  • Save and reproduce previously trained models.
  • Share progress and results with your boss and collaborators.
  • Catch regressions and immediately get alerted when performance drops.
  • Benchmark model performance and compare model versions.

Create a collaborative team

  1. Sign up or log in to your free W&B account.
  2. Click Invite Team in the navigation bar.
  3. Create your team and invite collaborators.

Create a team profile

You can customize your team’s profile page to show an introduction and showcase reports and projects that are visible to the public or team members. Present reports, projects, and external links.

  • Highlight your best research to visitors by showcasing your best public reports
  • Showcase the most active projects to make it easier for teammates to find them
  • Find collaborators by adding external links to your company or research lab’s website and any papers you’ve published

Remove team members

Team admins can open the team settings page and click the delete button next to the departing member’s name. Any runs logged to the team remain after a user leaves.

Manage team roles and permissions

Select a team role when you invite colleagues to join a team. The following team role options are available:

  • Admin: Team admins can add and remove other admins or team members. They have permissions to modify all projects and full deletion permissions. This includes, but is not limited to, deleting runs, projects, artifacts, and sweeps.
  • Member: A regular member of the team. An admin invites a team member by email. A team member cannot invite other members. Team members can only delete runs and sweep runs created by that member. Suppose you have two members, A and B. Member B moves a run from Member B’s project to a different project owned by Member A. Member A cannot delete the run Member B moved to Member A’s project. Only the member that creates the run, or the team admin, can delete the run.
  • View-Only (Enterprise-only feature): View-Only members can view assets within the team such as runs, reports, and workspaces. They can follow and comment on reports, but they can not create, edit, or delete project overview, reports, or runs.
  • Custom roles (Enterprise-only feature): Custom roles allow organization admins to compose new roles based on either of the View-Only or Member roles, together with additional permissions to achieve fine-grained access control. Team admins can then assign any of those custom roles to users in their respective teams. Refer to Introducing Custom Roles for W&B Teams for details.
  • Service accounts (Enterprise-only feature): Refer to Use service accounts to automate workflows.

Team settings

Team settings allow you to manage the settings for your team and its members. With these privileges, you can effectively oversee and organize your team within W&B.

| Permissions | View-Only | Team Member | Team Admin |
| --- | --- | --- | --- |
| Add team members | | | X |
| Remove team members | | | X |
| Manage team settings | | | X |

Model Registry

The proceeding table lists permissions that apply to all projects across a given team.

| Permissions | View-Only | Team Member | Model Registry Admin | Team Admin |
| --- | --- | --- | --- | --- |
| Add aliases | | X | X | X |
| Add models to the registry | | X | X | X |
| View models in the registry | X | X | X | X |
| Download models | X | X | X | X |
| Add/Remove Registry Admins | | | X | X |
| Add/Remove Protected Aliases | | | X | |

See the Model Registry chapter for more information about protected aliases.

Reports

Report permissions grant access to create, view, and edit reports. The proceeding table lists permissions that apply to all reports across a given team.

| Permissions | View-Only | Team Member | Team Admin |
| --- | --- | --- | --- |
| View reports | X | X | X |
| Create reports | | X | X |
| Edit reports | | X (team members can only edit their own reports) | X |
| Delete reports | | X (team members can only delete their own reports) | X |

Experiments

The proceeding table lists permissions that apply to all experiments across a given team.

| Permissions | View-Only | Team Member | Team Admin |
| --- | --- | --- | --- |
| View experiment metadata (includes history metrics, system metrics, files, and logs) | X | X | X |
| Edit experiment panels and workspaces | | X | X |
| Log experiments | | X | X |
| Delete experiments | | X (team members can only delete experiments they created) | X |
| Stop experiments | | X (team members can only stop experiments they created) | X |

Artifacts

The proceeding table lists permissions that apply to all artifacts across a given team.

| Permissions | View-Only | Team Member | Team Admin |
| --- | --- | --- | --- |
| View artifacts | X | X | X |
| Create artifacts | | X | X |
| Delete artifacts | | X | X |
| Edit metadata | | X | X |
| Edit aliases | | X | X |
| Delete aliases | | X | X |
| Download artifact | | X | X |

System settings (W&B Server only)

Use system permissions to create and manage teams and their members and to adjust system settings. These privileges enable you to effectively administer and maintain the W&B instance.

| Permissions | View-Only | Team Member | Team Admin | System Admin |
| --- | --- | --- | --- | --- |
| Configure system settings | | | | X |
| Create/delete teams | | | | X |

Team service account behavior

  • When you configure a team in your training environment, you can use a service account from that team to log runs to either private or public projects within that team. Additionally, you can attribute those runs to a user if the WANDB_USERNAME or WANDB_USER_EMAIL environment variable is set and the referenced user is a member of that team (see the sketch after this list).
  • When you do not configure a team in your training environment and use a service account, the runs log to the named project within that service account's parent team. In this case as well, you can attribute the runs to a user if the WANDB_USERNAME or WANDB_USER_EMAIL environment variable is set and the referenced user is a member of the service account's parent team.
  • A service account cannot log runs to a private project in a team other than its parent team. It can log runs to a project in another team only if that project's visibility is set to Open.
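
A minimal sketch of the attribution described above, assuming your training environment authenticates with a team service account's API key; the key, username, entity, and project values are placeholders:

import os
import wandb

# Authenticate as the team's service account (placeholder key).
os.environ["WANDB_API_KEY"] = "<service-account-api-key>"

# Attribute runs created by the service account to a human team member.
os.environ["WANDB_USERNAME"] = "alice"  # must be a member of the team

run = wandb.init(entity="my-team", project="my-project")
run.log({"loss": 0.1})
run.finish()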

Add social badges to your intro

In your Intro, type /, choose Markdown, and paste the markdown snippet that renders your badge. After you convert it to WYSIWYG, you can resize the badge.

For example, to add a Twitter follow badge, add [Twitter: @weights_biases](https://twitter.com/intent/follow?screen_name=weights_biases), replacing weights_biases with your Twitter username.


Team trials

See the pricing page for more information on W&B plans. You can download all your data at any time, either using the dashboard UI or the Export API.

Privacy settings

You can see the privacy settings of all team projects on the team settings page: app.wandb.ai/teams/your-team-name

Advanced configuration

Secure storage connector

The team-level secure storage connector allows teams to use their own cloud storage bucket with W&B. This provides greater data access control and data isolation for teams with highly sensitive data or strict compliance requirements. Refer to Secure Storage Connector for more information.

5.4.5 - Manage storage

Ways to manage W&B data storage.

If you are approaching or exceeding your storage limit, there are multiple paths forward to manage your data. The path that’s best for you will depend on your account type and your current project setup.

Manage storage consumption

W&B offers different methods of optimizing your storage consumption:

Delete data

You can also choose to delete data to remain under your storage limit. There are several ways to do this:

  • Delete data interactively with the app UI.
  • Set a TTL (time-to-live) policy on Artifacts so they are automatically deleted (see the sketch after this list).
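
A minimal sketch of setting a TTL policy, assuming a recent SDK version that supports the Artifact.ttl property; the project and artifact names are placeholders:

import wandb
from datetime import timedelta

run = wandb.init(project="storage-management-demo")

artifact = wandb.Artifact(name="dataset", type="dataset")
artifact.add_file(local_path="path/to/file")

# Automatically delete this artifact version 30 days after it is created.
artifact.ttl = timedelta(days=30)

run.log_artifact(artifact)
run.finish()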

5.4.6 - System metrics

Metrics automatically logged by wandb

This page provides detailed information about the system metrics that are tracked by the W&B SDK.
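
System metrics are stored alongside your run's history and can be retrieved programmatically. The following is a minimal sketch using the W&B Public API, assuming the "events" stream holds the system metrics; the entity, project, and run ID are placeholders:

import wandb

api = wandb.Api()
run = api.run("my-entity/my-project/abc123")  # placeholder run path

# System metrics (cpu, memory, gpu, disk, network, ...) live in the "events" stream.
system_metrics = run.history(stream="events")
print(system_metrics.columns)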

CPU

Process CPU Percent (CPU)

Percentage of CPU usage by the process, normalized by the number of available CPUs.

W&B assigns a cpu tag to this metric.

CPU Percent

CPU usage of the system on a per-core basis.

W&B assigns a cpu.{i}.cpu_percent tag to this metric.

Process CPU Threads

The number of threads utilized by the process.

W&B assigns a proc.cpu.threads tag to this metric.

Disk

By default, the usage metrics are collected for the / path. To configure the paths to be monitored, use the following setting:

import wandb

run = wandb.init(
    settings=wandb.Settings(
        # Monitor disk usage for these mount points instead of the default "/".
        _stats_disk_paths=("/System/Volumes/Data", "/home", "/mnt/data"),
    ),
)

Disk Usage Percent

Represents the total system disk usage in percentage for specified paths.

W&B assigns a disk.{path}.usagePercent tag to this metric.

Disk Usage

Represents the total system disk usage in gigabytes (GB) for specified paths. The paths that are accessible are sampled, and the disk usage (in GB) for each path is appended to the samples.

W&B assigns a disk.{path}.usageGB tag to this metric.

Disk In

Indicates the total system disk read in megabytes (MB). The initial disk read bytes are recorded when the first sample is taken. Subsequent samples calculate the difference between the current read bytes and the initial value.

W&B assigns a disk.in tag to this metric.

Disk Out

Represents the total system disk write in megabytes (MB). Similar to Disk In, the initial disk write bytes are recorded when the first sample is taken. Subsequent samples calculate the difference between the current write bytes and the initial value.

W&B assigns a disk.out tag to this metric.
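
Disk In, Disk Out, and the network metrics below all follow the same cumulative-counter pattern: record the counter value at the first sample, then report the difference from that baseline on each subsequent sample. A minimal sketch of the pattern using psutil; this illustrates the idea and is not W&B's exact implementation:

import psutil

# Record the initial cumulative counters when sampling starts.
initial = psutil.disk_io_counters()
initial_read, initial_write = initial.read_bytes, initial.write_bytes

def sample_disk_io():
    """Return (disk_in_mb, disk_out_mb) as deltas from the initial sample."""
    current = psutil.disk_io_counters()
    disk_in_mb = (current.read_bytes - initial_read) / 1024**2
    disk_out_mb = (current.write_bytes - initial_write) / 1024**2
    return disk_in_mb, disk_out_mb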

Memory

Process Memory RSS

Represents the Memory Resident Set Size (RSS) in megabytes (MB) for the process. RSS is the portion of memory occupied by a process that is held in main memory (RAM).

W&B assigns a proc.memory.rssMB tag to this metric.

Process Memory Percent

Indicates the memory usage of the process as a percentage of the total available memory.

W&B assigns a proc.memory.percent tag to this metric.

Memory Percent

Represents the total system memory usage as a percentage of the total available memory.

W&B assigns a memory tag to this metric.

Memory Available

Indicates the total available system memory in megabytes (MB).

W&B assigns a proc.memory.availableMB tag to this metric.

Network

Network Sent

Represents the total bytes sent over the network. The initial bytes sent are recorded when the metric is first initialized. Subsequent samples calculate the difference between the current bytes sent and the initial value.

W&B assigns a network.sent tag to this metric.

Network Received

Indicates the total bytes received over the network. Similar to Network Sent, the initial bytes received are recorded when the metric is first initialized. Subsequent samples calculate the difference between the current bytes received and the initial value.

W&B assigns a network.recv tag to this metric.

NVIDIA GPU

In addition to the metrics described below, if the process and/or its children use a particular GPU, W&B captures the corresponding metrics as gpu.process.{gpu_index}...
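
These values come from NVIDIA's management library (NVML). The following is a minimal sketch of the kind of queries involved, using the pynvml package; it is an illustration, not W&B's exact implementation:

import pynvml

pynvml.nvmlInit()

for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # .gpu and .memory, in percent
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # .used and .total, in bytes
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    power_watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # NVML reports milliwatts

    print(
        f"gpu.{i}.gpu={util.gpu}% "
        f"gpu.{i}.memoryAllocated={100 * mem.used / mem.total:.1f}% "
        f"gpu.{i}.temp={temp}C "
        f"gpu.{i}.powerWatts={power_watts:.1f}"
    )

pynvml.nvmlShutdown()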

GPU Memory Utilization

Represents the GPU memory utilization in percent for each GPU.

W&B assigns a gpu.{gpu_index}.memory tag to this metric.

GPU Memory Allocated

Indicates the GPU memory allocated as a percentage of the total available memory for each GPU.

W&B assigns a gpu.{gpu_index}.memoryAllocated tag to this metric.

GPU Memory Allocated Bytes

Specifies the GPU memory allocated in bytes for each GPU.

W&B assigns a gpu.{gpu_index}.memoryAllocatedBytes tag to this metric.

GPU Utilization

Reflects the GPU utilization in percent for each GPU.

W&B assigns a gpu.{gpu_index}.gpu tag to this metric.

GPU Temperature

The GPU temperature in Celsius for each GPU.

W&B assigns a gpu.{gpu_index}.temp tag to this metric.

GPU Power Usage Watts

Indicates the GPU power usage in Watts for each GPU.

W&B assigns a gpu.{gpu_index}.powerWatts tag to this metric.

GPU Power Usage Percent

Reflects the GPU power usage as a percentage of its power capacity for each GPU.

W&B assigns a gpu.{gpu_index}.powerPercent tag to this metric.

GPU SM Clock Speed

Represents the clock speed of the Streaming Multiprocessor (SM) on the GPU in MHz. This metric is indicative of the processing speed within the GPU cores responsible for computation tasks.

W&B assigns a gpu.{gpu_index}.smClock tag to this metric.

GPU Memory Clock Speed

Represents the clock speed of the GPU memory in MHz, which influences the rate of data transfer between the GPU memory and processing cores.

W&B assigns a gpu.{gpu_index}.memoryClock tag to this metric.

GPU Graphics Clock Speed

Represents the base clock speed for graphics rendering operations on the GPU, expressed in MHz. This metric often reflects performance during visualization or rendering tasks.

W&B assigns a gpu.{gpu_index}.graphicsClock tag to this metric.

GPU Corrected Memory Errors

Tracks the count of GPU memory errors that are automatically corrected by error-checking (ECC) protocols, indicating recoverable hardware issues.

W&B assigns a gpu.{gpu_index}.correctedMemoryErrors tag to this metric.

GPU Uncorrected Memory Errors

Tracks the count of GPU memory errors that remain uncorrected, indicating non-recoverable errors that can impact processing reliability.

W&B assigns a gpu.{gpu_index}.unCorrectedMemoryErrors tag to this metric.

GPU Encoder Utilization

Represents the percentage utilization of the GPU’s video encoder, indicating its load when encoding tasks (for example, video rendering) are running.

W&B assigns a gpu.{gpu_index}.encoderUtilization tag to this metric.

AMD GPU

W&B extracts metrics from the output of the rocm-smi tool supplied by AMD (rocm-smi -a --json).

AMD GPU Utilization

Represents the GPU utilization in percent for each AMD GPU device.

W&B assigns a gpu.{gpu_index}.gpu tag to this metric.

AMD GPU Memory Allocated

Indicates the GPU memory allocated as a percentage of the total available memory for each AMD GPU device.

W&B assigns a gpu.{gpu_index}.memoryAllocated tag to this metric.

AMD GPU Temperature

The GPU temperature in Celsius for each AMD GPU device.

W&B assigns a gpu.{gpu_index}.temp tag to this metric.

AMD GPU Power Usage Watts

The GPU power usage in Watts for each AMD GPU device.

W&B assigns a gpu.{gpu_index}.powerWatts tag to this metric.

AMD GPU Power Usage Percent

Reflects the GPU power usage as a percentage of its power capacity for each AMD GPU device.

W&B assigns a gpu.{gpu_index}.powerPercent tag to this metric.

Apple ARM Mac GPU

Apple GPU Utilization

Indicates the GPU utilization in percent for Apple GPU devices, specifically on ARM Macs.

W&B assigns a gpu.0.gpu tag to this metric.

Apple GPU Memory Allocated

The GPU memory allocated as a percentage of the total available memory for Apple GPU devices on ARM Macs.

W&B assigns a gpu.0.memoryAllocated tag to this metric.

Apple GPU Temperature

The GPU temperature in Celsius for Apple GPU devices on ARM Macs.

W&B assigns a gpu.0.temp tag to this metric.

Apple GPU Power Usage Watts

The GPU power usage in Watts for Apple GPU devices on ARM Macs.

W&B assigns a gpu.0.powerWatts tag to this metric.

Apple GPU Power Usage Percent

The GPU power usage as a percentage of its power capacity for Apple GPU devices on ARM Macs.

W&B assigns a gpu.0.powerPercent tag to this metric.

Graphcore IPU

Graphcore IPUs (Intelligence Processing Units) are unique hardware accelerators designed specifically for machine intelligence tasks.

IPU Device Metrics

These metrics represent various statistics for a specific IPU device. Each metric has a device ID (device_id) and a metric key (metric_key) to identify it. W&B assigns a ipu.{device_id}.{metric_key} tag to this metric.

Metrics are extracted using the proprietary gcipuinfo library, which interacts with Graphcore's gcipuinfo binary. The sample method fetches these metrics for each IPU device associated with the process ID (pid). To avoid logging redundant data, only metrics that have changed since the previous sample are logged, along with the full set the first time a device's metrics are fetched.

For each metric, the parse_metric method extracts the value from its raw string representation, and the aggregate method then aggregates values across multiple samples.
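
As a hypothetical illustration of the parsing step, a helper like the following could pull the numeric portion out of a raw reading such as "24.5 C" or "1330 MHz"; this is illustrative only, not the actual gcipuinfo-based implementation:

import re

def parse_metric(raw_value: str):
    """Extract the numeric value from a raw metric string; return the raw string if no number is found."""
    match = re.search(r"[-+]?\d*\.?\d+", raw_value)
    return float(match.group()) if match else raw_value

print(parse_metric("24.5 C"))    # 24.5
print(parse_metric("1330 MHz"))  # 1330.0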

The following lists available metrics and their units:

  • Average Board Temperature (average board temp (C)): Temperature of the IPU board in Celsius.
  • Average Die Temperature (average die temp (C)): Temperature of the IPU die in Celsius.
  • Clock Speed (clock (MHz)): The clock speed of the IPU in MHz.
  • IPU Power (ipu power (W)): Power consumption of the IPU in Watts.
  • IPU Utilization (ipu utilisation (%)): Percentage of IPU utilization.
  • IPU Session Utilization (ipu utilisation (session) (%)): IPU utilization percentage specific to the current session.
  • Data Link Speed (speed (GT/s)): Speed of data transmission in Giga-transfers per second.

Google Cloud TPU

Tensor Processing Units (TPUs) are Google’s custom-developed ASICs (Application Specific Integrated Circuits) used to accelerate machine learning workloads.

TPU Memory usage

The current High Bandwidth Memory usage in bytes per TPU core.

W&B assigns a tpu.{tpu_index}.memoryUsageBytes tag to this metric.

TPU Memory usage percentage

The current High Bandwidth Memory usage in percent per TPU core.

W&B assigns a tpu.{tpu_index}.memoryUsagePercent tag to this metric.

TPU Duty cycle

TensorCore duty cycle percentage per TPU device. Tracks the percentage of time over the sample period during which the accelerator TensorCore was actively processing. A larger value means better TensorCore utilization.

W&B assigns a tpu.{tpu_index}.dutyCycle tag to this metric.

AWS Trainium

AWS Trainium is a specialized hardware platform offered by AWS that focuses on accelerating machine learning workloads. The neuron-monitor tool from AWS is used to capture the AWS Trainium metrics.

Trainium Neuron Core Utilization

The utilization percentage of each NeuronCore, reported on a per-core basis.

W&B assigns a trn.{core_index}.neuroncore_utilization tag to this metric.

Trainium Host Memory Usage, Total

The total memory consumption on the host in bytes.

W&B assigns a trn.host_total_memory_usage tag to this metric.

Trainium Neuron Device Total Memory Usage

The total memory usage on the Neuron device in bytes.

W&B assigns a trn.neuron_device_total_memory_usage tag to this metric.

Trainium Host Memory Usage Breakdown

The following is a breakdown of memory usage on the host:

  • Application Memory (trn.host_total_memory_usage.application_memory): Memory used by the application.
  • Constants (trn.host_total_memory_usage.constants): Memory used for constants.
  • DMA Buffers (trn.host_total_memory_usage.dma_buffers): Memory used for Direct Memory Access buffers.
  • Tensors (trn.host_total_memory_usage.tensors): Memory used for tensors.

Trainium Neuron Core Memory Usage Breakdown

Detailed memory usage information for each NeuronCore:

  • Constants (trn.{core_index}.neuroncore_memory_usage.constants)
  • Model Code (trn.{core_index}.neuroncore_memory_usage.model_code)
  • Model Shared Scratchpad (trn.{core_index}.neuroncore_memory_usage.model_shared_scratchpad)
  • Runtime Memory (trn.{core_index}.neuroncore_memory_usage.runtime_memory)
  • Tensors (trn.{core_index}.neuroncore_memory_usage.tensors)

OpenMetrics

Capture and log metrics from external endpoints that expose OpenMetrics / Prometheus-compatible data. You can apply custom regex-based filters to select which metrics are consumed from those endpoints.

Refer to this report for a detailed example of how to use this feature in a particular case of monitoring GPU cluster performance with the NVIDIA DCGM-Exporter.
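
A minimal sketch of consuming an OpenMetrics endpoint, assuming your SDK version exposes the internal _stats_open_metrics_endpoints setting; the endpoint name and URL are placeholders (for example, an NVIDIA DCGM-Exporter scrape endpoint):

import wandb

run = wandb.init(
    project="cluster-monitoring-demo",
    settings=wandb.Settings(
        # Placeholder endpoint: scrape Prometheus/OpenMetrics data from this URL.
        _stats_open_metrics_endpoints={
            "dcgm": "http://localhost:9400/metrics",
        },
    ),
)

# ... training code; the consumed metrics are logged as system metrics ...
run.finish()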

5.4.7 - Anonymous mode

Log and visualize data without a W&B account

Are you publishing code that you want anyone to be able to run easily? Use anonymous mode to let someone run your code, see a W&B dashboard, and visualize results without needing to create a W&B account first.

Allow results to be logged in anonymous mode with:

import wandb

wandb.init(anonymous="allow")

For example, the following code snippet shows how to create and log an artifact with W&B:

import wandb

run = wandb.init(anonymous="allow")

artifact = wandb.Artifact(name="art1", type="foo")
artifact.add_file(local_path="path/to/file")
run.log_artifact(artifact)

run.finish()

Try the example notebook to see how anonymous mode works.