Reports: Document and collaborate on your discoveries
How does W&B work?
Read the following sections in this order if you are a first-time user of W&B and you are interested in training, tracking, and visualizing machine learning models and experiments:
Learn about runs, W&B’s basic unit of computation.
Create and track machine learning experiments with Experiments.
Discover W&B’s flexible and lightweight building block for dataset and model versioning with Artifacts.
Automate hyperparameter search and explore the space of possible models with Sweeps.
Manage the model lifecycle from training to production with Registry.
Visualize predictions across model versions with our Data Visualization guide.
Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with Reports.
Are you a first-time user of W&B?
Try the quickstart to learn how to install W&B and how to add W&B to your code.
1 - W&B Quickstart
W&B Quickstart
Install W&B and start tracking your machine learning experiments in minutes.
1. Create an account and install W&B
Before you get started, make sure you create an account and install W&B:
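For example, after installing the W&B Python SDK (for instance, with pip install wandb), you can log in from a script or notebook. The snippet below is a minimal sketch of that setup:

import wandb

# Log in to W&B; you are prompted for your API key the first time
wandb.login()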
Initialize a W&B Run object in your Python script or notebook with wandb.init() and pass a dictionary to the config parameter with key-value pairs of hyperparameter names and values:
run = wandb.init(
    # Set the project where this run will be logged
    project="my-awesome-project",
    # Track hyperparameters and run metadata
    config={
        "learning_rate": 0.01,
        "epochs": 10,
    },
)
Putting it all together, your training script might look similar to the following code example. The W&B-specific lines are the calls to wandb.login(), wandb.init(), and wandb.log().
Note that we added code that mimics machine learning training.
# train.py
import wandb
import random  # for demo script

wandb.login()

epochs = 10
lr = 0.01

run = wandb.init(
    # Set the project where this run will be logged
    project="my-awesome-project",
    # Track hyperparameters and run metadata
    config={
        "learning_rate": lr,
        "epochs": epochs,
    },
)

offset = random.random() / 5
print(f"lr: {lr}")

# simulating a training run
for epoch in range(2, epochs):
    acc = 1 - 2**-epoch - random.random() / epoch - offset
    loss = 2**-epoch + random.random() / epoch + offset
    print(f"epoch={epoch}, accuracy={acc}, loss={loss}")
    wandb.log({"accuracy": acc, "loss": loss})

# run.log_code()
That’s it. Navigate to the W&B App at https://wandb.ai/home to view how the metrics we logged with W&B (accuracy and loss) improved during each training step.
The image above shows the loss and accuracy that were tracked each time we ran the script. Each run object that was created is shown within the Runs column. Each run name is randomly generated.
What’s next?
Explore the rest of the W&B ecosystem.
Check out W&B Integrations to learn how to integrate W&B with your ML framework such as PyTorch, ML library such as Hugging Face, or ML service such as SageMaker.
Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with W&B Reports.
Create W&B Artifacts to track datasets, models, dependencies, and results through each step of your machine learning pipeline.
Automate hyperparameter search and explore the space of possible models with W&B Sweeps.
Understand your datasets, visualize model predictions, and share insights in a central dashboard.
Navigate to W&B AI Academy and learn about LLMs, MLOps and W&B Models from hands-on courses.
2 - W&B Models
W&B Models is the system of record for ML Practitioners who want to organize their models, boost productivity and collaboration, and deliver production ML at scale.
Configure custom automations that trigger key workflows for model CI/CD.
Machine learning practitioners rely on W&B Models as their ML system of record to track and visualize experiments, manage model versions and lineage, and optimize hyperparameters.
Track machine learning experiments with a few lines of code. You can then review the results in an interactive dashboard or export your data to Python for programmatic access using our Public API.
Utilize W&B Integrations if you use popular frameworks such as PyTorch, Keras, or Scikit. See our Integration guides for a full list of integrations and information on how to add W&B to your code.
The image above shows an example dashboard where you can view and compare metrics across multiple runs.
How it works
Track a machine learning experiment with a few lines of code:
Store a dictionary of hyperparameters, such as learning rate or model type, into your configuration (wandb.config).
Log metrics (wandb.log()) over time in a training loop, such as accuracy and loss.
Save outputs of a run, like the model weights or a table of predictions.
The following pseudocode demonstrates a common W&B Experiment tracking workflow:
# 1. Start a W&B Run
wandb.init(entity="", project="my-project-name")
# 2. Save model inputs and hyperparameters
wandb.config.learning_rate = 0.01
# Import model and data
model, dataloader = get_model(), get_data()
# Model training code goes here
# 3. Log metrics over time to visualize performance
wandb.log({"loss": loss})
# 4. Log an artifact to W&B
wandb.log_artifact(model)
How to get started
Depending on your use case, explore the following resources to get started with W&B Experiments:
Read the W&B Quickstart for a step-by-step outline of the W&B Python SDK commands you could use to create, track, and use a dataset artifact.
Use the W&B Python SDK to track machine learning experiments. You can then review the results in an interactive dashboard or export your data to Python for programmatic access with the W&B Public API.
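For instance, the following sketch shows how run data might be pulled with the Public API; the "<entity>/<project>" path is a placeholder you would replace with your own:

import wandb

# Query all runs in a project and print a few of their properties
api = wandb.Api()
runs = api.runs("<entity>/<project>")

for run in runs:
    print(run.name, run.state, run.config)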
This guide describes how to use W&B building blocks to create a W&B Experiment.
At the beginning of your script, call the wandb.init() API to generate a background process to sync and log data as a W&B Run.
The following code snippet demonstrates how to create a new W&B project named “cat-classification”. A note, “My first experiment”, is added to help identify this run. The tags “baseline” and “paper1” are included as a reminder that this run is a baseline experiment intended for a future paper publication.
# Import the W&B Python Library
import wandb

# 1. Start a W&B Run
run = wandb.init(
    project="cat-classification",
    notes="My first experiment",
    tags=["baseline", "paper1"],
)
A Run object is returned when you initialize W&B with wandb.init(). Additionally, W&B creates a local directory where all logs and files are saved and streamed asynchronously to a W&B server.
Note: Runs are added to pre-existing projects. If a project called “cat-classification” already exists when you call wandb.init(), that project continues to exist and is not deleted; instead, the new run is added to it.
Capture a dictionary of hyperparameters
Save a dictionary of hyperparameters such as learning rate or model type. The model settings you capture in config are useful later to organize and query your results.
# 2. Capture a dictionary of hyperparameters
wandb.config = {"epochs": 100, "learning_rate": 0.001, "batch_size": 128}
Log metrics during each for loop (epoch): the accuracy and loss values are computed and logged to W&B with wandb.log(). By default, when you call wandb.log it appends a new step to the history object and updates the summary object.
The following code example shows how to log metrics with wandb.log.
Details of how to set up your model and retrieve data are omitted.
# Set up model and data
model, dataloader = get_model(), get_data()

for epoch in range(wandb.config.epochs):
    for batch in dataloader:
        loss, accuracy = model.training_step()
        # 3. Log metrics inside your training loop to visualize
        # model performance
        wandb.log({"accuracy": accuracy, "loss": loss})
Optionally log a W&B Artifact. Artifacts make it easy to version datasets and models.
wandb.log_artifact(model)
For more information about Artifacts, see the Artifacts Chapter. For more information about versioning models, see Model Management.
Putting it all together
The full script with the preceding code snippets is found below:
# Import the W&B Python Library
import wandb

# 1. Start a W&B Run
run = wandb.init(project="cat-classification", notes="", tags=["baseline", "paper1"])
# 2. Capture a dictionary of hyperparameters
wandb.config = {"epochs": 100, "learning_rate": 0.001, "batch_size": 128}
# Set up model and data
model, dataloader = get_model(), get_data()

for epoch in range(wandb.config.epochs):
    for batch in dataloader:
        loss, accuracy = model.training_step()
        # 3. Log metrics inside your training loop to visualize
        # model performance
        wandb.log({"accuracy": accuracy, "loss": loss})

# 4. Log an artifact to W&B
wandb.log_artifact(model)
# Optional: save model at the end
model.to_onnx()
wandb.save("model.onnx")
Next steps: Visualize your experiment
Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like parallel coordinates plots, parameter importance analyses, and more.
The following are some suggested guidelines to consider when you create experiments:
Config: Track hyperparameters, architecture, dataset, and anything else you’d like to use to reproduce your model. These show up in columns; use config columns to group, sort, and filter runs dynamically in the app.
Project: A project is a set of experiments you can compare together. Each project gets a dedicated dashboard page, and you can easily turn on and off different groups of runs to compare different model versions.
Notes: Set a quick commit message directly from your script. Edit and access notes in the Overview section of a run in the W&B App.
Tags: Identify baseline runs and favorite runs. You can filter runs using tags. You can edit tags at a later time on the Overview section of your project’s dashboard on the W&B App.
The following code snippet demonstrates how to define a W&B Experiment using the best practices listed above:
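As a minimal sketch of that pattern (the project name, notes, tags, and config values here are placeholders):

import wandb

# Hyperparameters and other settings you want to reproduce the model with
config = {
    "learning_rate": 0.01,
    "architecture": "CNN",
    "dataset": "cats-vs-dogs",
}

run = wandb.init(
    project="detect-cats",                        # project groups comparable experiments
    notes="baseline with default augmentation",   # quick commit-style message
    tags=["baseline", "paper1"],                  # labels for filtering runs later
    config=config,                                # tracked hyperparameters
)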
Use the wandb.config object to save your training configuration such as:
hyperparameters
input settings such as the dataset name or model type
any other independent variables for your experiments.
The wandb.config attribute makes it easy to analyze your experiments and reproduce your work in the future. You can group by configuration values in the W&B App, compare the settings of different W&B Runs and view how different training configurations affect the output. A Run’s config attribute is a dictionary-like object, and it can be built from lots of dictionary-like objects.
Dependent variables (like loss and accuracy) or output metrics should be saved with wandb.log instead.
Set up an experiment configuration
Configurations are typically defined in the beginning of a training script. Machine learning workflows may vary, however, so you are not required to define a configuration at the beginning of your training script.
We recommend that you avoid using dots in your config variable names; use a dash or underscore instead. Use the dictionary access syntax ["key"]["foo"] instead of the attribute access syntax config.key.foo if your script accesses wandb.config keys below the root.
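For example, the following sketch (with a hypothetical nested config) contrasts the two access styles:

import wandb

# Hypothetical nested config used only to illustrate the access pattern
run = wandb.init(project="config_example", config={"optimizer": {"name": "adam", "lr": 1e-3}})

# Preferred: dictionary access for keys below the root
lr = run.config["optimizer"]["lr"]

# Avoid attribute access below the root, such as run.config.optimizer.lr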
The following sections outline common scenarios for defining your experiment’s configuration.
Set the configuration at initialization
Pass a dictionary at the beginning of your script when you call the wandb.init() API to generate a background process to sync and log data as a W&B Run.
The following code snippet demonstrates how to define a Python dictionary with configuration values and how to pass that dictionary as an argument when you initialize a W&B Run.
import wandb
# Define a config dictionary object
config = {
    "hidden_layer_sizes": [32, 64],
    "kernel_sizes": [3],
    "activation": "ReLU",
    "pool_sizes": [2],
    "dropout": 0.5,
    "num_classes": 10,
}

# Pass the config dictionary when you initialize W&B
run = wandb.init(project="config_example", config=config)
You can pass a nested dictionary as your config. W&B flattens the names using dots in the W&B backend.
Access the values from the dictionary similarly to how you access other dictionaries in Python:
# Access values with the key as the index value
hidden_layer_sizes = wandb.config["hidden_layer_sizes"]
kernel_sizes = wandb.config["kernel_sizes"]
activation = wandb.config["activation"]

# Python dictionary get() method
hidden_layer_sizes = wandb.config.get("hidden_layer_sizes")
kernel_sizes = wandb.config.get("kernel_sizes")
activation = wandb.config.get("activation")
Throughout the Developer Guide and examples we copy the configuration values into separate variables. This step is optional. It is done for readability.
Set the configuration with argparse
You can set your configuration with an argparse object. argparse, short for argument parser, is a standard library module in Python 3.2 and above that makes it easy to write scripts that take advantage of all the flexibility and power of command line arguments.
This is useful for tracking results from scripts that are launched from the command line.
The following Python script demonstrates how to define a parser object to define and set your experiment config. The functions train_one_epoch and evaluate_one_epoch are provided to simulate a training loop for the purpose of this demonstration:
# config_experiment.py
import wandb
import argparse
import numpy as np
import random


# Training and evaluation demo code
def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


def main(args):
    # Start a W&B Run
    run = wandb.init(project="config_example", config=args)

    # Access values from the config dictionary and store them
    # into variables for readability
    lr = wandb.config["learning_rate"]
    bs = wandb.config["batch_size"]
    epochs = wandb.config["epochs"]

    # Simulate training and logging values to W&B
    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)

        wandb.log(
            {
                "epoch": epoch,
                "train_acc": train_acc,
                "train_loss": train_loss,
                "val_acc": val_acc,
                "val_loss": val_loss,
            }
        )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument("-b", "--batch_size", type=int, default=32, help="Batch size")
    parser.add_argument(
        "-e", "--epochs", type=int, default=50, help="Number of training epochs"
    )
    parser.add_argument(
        "-lr", "--learning_rate", type=float, default=0.001, help="Learning rate"
    )

    args = parser.parse_args()
    main(args)
Set the configuration throughout your script
You can add more parameters to your config object throughout your script. The following code snippet demonstrates how to add new key-value pairs to your config object:
import wandb
# Define a config dictionary object
config = {
    "hidden_layer_sizes": [32, 64],
    "kernel_sizes": [3],
    "activation": "ReLU",
    "pool_sizes": [2],
    "dropout": 0.5,
    "num_classes": 10,
}

# Pass the config dictionary when you initialize W&B
run = wandb.init(project="config_example", config=config)

# Update config after you initialize W&B
wandb.config["dropout"] = 0.2
wandb.config.epochs = 4
wandb.config["batch_size"] = 32
Use the W&B Public API to update your config (or anything else about a completed Run) after your Run finishes. This is particularly useful if you forgot to log a value during a Run.
Provide your entity, project name, and the Run ID to update your configuration after a Run has finished. Find these values directly from the Run object itself (wandb.run) or from the W&B App UI:
api = wandb.Api()

# Access attributes directly from the run object
# or from the W&B App
username = wandb.run.entity
project = wandb.run.project
run_id = wandb.run.id

run = api.run(f"{username}/{project}/{run_id}")
run.config["bar"] = 32
run.update()
You can also populate the config from absl flags:

flags.DEFINE_string("model", None, "model to run")  # name, default, help
wandb.config.update(flags.FLAGS)  # adds absl flags to config
File-Based Configs
If you place a file named config-defaults.yaml in the same directory as your run script, the run automatically picks up the key-value pairs defined in the file and passes them to wandb.config.
The following code snippet shows a sample config-defaults.yaml YAML file:
# config-defaults.yaml
batch_size:
  desc: Size of each mini-batch
  value: 32
You can override the default values automatically loaded from config-defaults.yaml by setting updated values in the config argument of wandb.init. For example:
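As a sketch (assuming config-defaults.yaml defines batch_size as above), overriding one of its values might look like this:

import wandb

# The value passed here takes precedence over the one in config-defaults.yaml
run = wandb.init(config={"batch_size": 64})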
To load a configuration file other than config-defaults.yaml, use the --configs command-line argument and specify the path to the file:
python train.py --configs other-config.yaml
Example use case for file-based configs
Suppose you have a YAML file with some metadata for the run, and then a dictionary of hyperparameters in your Python script. You can save both in the nested config object:
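A minimal sketch of that pattern, assuming a hypothetical run_metadata.yaml file and PyYAML for parsing:

import yaml  # requires the PyYAML package
import wandb

# Load run metadata from a YAML file (file name and keys are illustrative)
with open("run_metadata.yaml") as f:
    metadata = yaml.safe_load(f)

# Hyperparameters defined directly in the script
hyperparameters = {"learning_rate": 0.01, "batch_size": 32}

# Save both under separate keys of a nested config
run = wandb.init(
    project="config_example",
    config={"metadata": metadata, "hyperparameters": hyperparameters},
)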
Reports: saved snapshots of notes, runs, and graphs
Artifacts: Contains all runs and the artifacts associated with that run
Overview tab
Project name: The name of the project. W&B creates a project for you when you initialize a run with the name you provide for the project field. You can change the name of the project at any time by selecting the Edit button in the upper right corner.
Description: A description of the project.
Project visibility: The visibility of the project, which determines who can access it. See Project visibility for more information.
Last active: Timestamp of the last time data was logged to this project
Owner: The entity that owns this project
Contributors: The number of users that contribute to this project
Total runs: The total number of runs in this project
Total compute: we add up all the run times in your project to get this total
Undelete runs: Click the dropdown menu and click “Undelete all runs” to recover deleted runs in your project.
Delete project: click the dot menu in the right corner to delete a project
A project’s workspace gives you a personal sandbox to compare experiments. Use projects to organize models that can be compared: runs working on the same problem with different architectures, hyperparameters, datasets, preprocessing, and so on.
Runs Sidebar: list of all the runs in your project
Dot menu: hover over a row in the sidebar to see the menu appear on the left side. Use this menu to rename a run, delete a run, or stop an active run.
Visibility icon: click the eye to turn on and off runs on graphs
Color: change the run color to another one of our presets or a custom color
Search: search runs by name. This also filters visible runs in the plots.
Filter: use the sidebar filter to narrow down the set of runs visible
Group: select a config column to dynamically group your runs, for example by architecture. Grouping makes plots show up with a line along the mean value, and a shaded region for the variance of points on the graph.
Sort: pick a value to sort your runs by, for example runs with the lowest loss or highest accuracy. Sorting will affect which runs show up on the graphs.
Expand button: expand the sidebar into the full table
Run count: the number in parentheses at the top is the total number of runs in the project. The number (N visualized) is the number of runs that have the eye turned on and are available to be visualized in each plot. In the example below, the graphs are only showing the first 10 of 183 runs. Edit a graph to increase the max number of runs visible.
Panels layout: use this scratch space to explore results, add and remove charts, and compare versions of your models based on different metrics
Click the section dropdown menu and click “Add section” to create a new section for panels. You can rename sections, drag them to reorganize them, and expand and collapse sections.
Each section has options in the upper right corner:
Switch to custom layout: The custom layout allows you to resize panels individually.
Switch to standard layout: The standard layout lets you resize all panels in the section at once, and gives you pagination.
Add section: Add a section above or below from the dropdown menu, or click the button at the bottom of the page to add a new section.
Rename section: Change the title for your section.
Export section to report: Save this section of panels to a new report.
Delete section: Remove the whole section and all the charts. This can be undone with the undo button at the bottom of the page in the workspace bar.
Add panel: Click the plus button to add a panel to the section.
Move panels between sections
Drag and drop panels to reorder and organize into sections. You can also click the “Move” button in the upper right corner of a panel to select a section to move the panel to.
Resize panels
Standard layout: All panels maintain the same size, and there are pages of panels. You can resize the panels by clicking and dragging the lower right corner. Resize the section by clicking and dragging the lower right corner of the section.
Custom layout: All panels are sized individually, and there are no pages.
Search for metrics
Use the search box in the workspace to filter down the panels. This search matches the panel titles, which are by default the name of the metrics visualized.
Runs tab
Use the runs tab to filter, group, and sort your results.
The following tabs demonstrate some common actions you can take in the runs tab.
Sort all rows in a Table by the value in a given column.
Hover your mouse over the column title. A kebab menu (three vertical dots) appears.
Select the kebab menu.
Choose Sort Asc or Sort Desc to sort the rows in ascending or descending order, respectively.
The preceding image demonstrates how to view sorting options for a Table column called val_acc.
Filter all rows by an expression with the Filter button on the top left of the dashboard.
Select Add filter to add one or more filters to your rows. Three dropdown menus appear. From left to right, the filter types are based on: Column name, Operator, and Values.

| | Column name | Binary relation | Value |
| --- | --- | --- | --- |
| Accepted values | String | =, ≠, ≤, ≥, IN, NOT IN | Integer, float, string, timestamp, null |
The expression editor shows a list of options for each term using autocomplete on column names and logical predicate structure. You can connect multiple logical predicates into one expression using “and” or “or” (and sometimes parentheses).
The preceding image shows a filter that is based on the `val_loss` column. The filter shows runs with a validation loss less than or equal to 1.
Group all rows by the value in a particular column with the Group by button in a column header.
By default, this turns other numeric columns into histograms showing the distribution of values for that column across the group. Grouping is helpful for understanding higher-level patterns in your data.
Reports tab
See all the snapshots of results in one place, and share findings with your team.
On the overview panel, you’ll find a variety of high-level information about the artifact, including its name and version, the hash digest used to detect changes and prevent duplication, the creation date, and any aliases. You can add or remove aliases here, and take notes on both the version and the artifact as a whole.
Metadata panel
The metadata panel provides access to the artifact’s metadata, which is provided when the artifact is constructed. This metadata might include configuration arguments required to reconstruct the artifact, URLs where more information can be found, or metrics produced during the run which logged the artifact. Additionally, you can see the configuration for the run which produced the artifact as well as the history metrics at the time of logging the artifact.
Usage panel
The Usage panel provides a code snippet for downloading the artifact for use outside of the web app, for example on a local machine. This section also indicates and links to the run which output the artifact and any runs which use the artifact as an input.
Files panel
The files panel lists the files and folders associated with the artifact. W&B uploads certain files for a run automatically. For example, requirements.txt shows the versions of each library the run used, while wandb-metadata.json and wandb-summary.json include information about the run. Other files may be uploaded, such as artifacts or media, depending on the run’s configuration. You can navigate through this file tree and view the contents directly in the W&B web app.
Tables associated with artifacts are particularly rich and interactive in this context. Learn more about using Tables with Artifacts here.
Lineage panel
The lineage panel provides a view of all of the artifacts associated with a project and the runs that connect them to each other. It shows run types as blocks and artifacts as circles, with arrows to indicate when a run of a given type consumes or produces an artifact of a given type. The type of the particular artifact selected in the left-hand column is highlighted.
Click the Explode toggle to view all of the individual artifact versions and the specific runs that connect them.
Action History Audit tab
The action history audit tab shows all of the alias actions and membership changes for a Collection so you can audit the entire evolution of the resource.
Versions tab
The versions tab shows all versions of the artifact as well as columns for each of the numeric values of the Run History at the time of logging the version. This allows you to compare performance and quickly identify versions of interest.
Star a project
Add a star to a project to mark that project as important. Projects that you and your team mark as important with stars appear at the top of your organization’s home page.
For example, the following image shows two projects that are marked as important, the zoo_experiment and registry_demo. Both projects appear at the top of the organization’s home page, within the Starred projects section.
There are two ways to mark a project as important: within a project’s overview tab or within your team’s profile page.
Navigate to your W&B project on the W&B App at https://wandb.ai/<team>/<project-name>.
Select the Overview tab from the project sidebar.
Choose the star icon in the upper right corner next to the Edit button.
Navigate to your team’s profile page at https://wandb.ai/<team>/projects.
Select the Projects tab.
Hover your mouse next to the project you want to star. Click on the star icon that appears.
For example, the following image shows the star icon next to the “Compare_Zoo_Models” project.
Confirm that your project appears on the landing page of your organization by clicking on the organization name in the top left corner of the app.
Delete a project
You can delete your project by clicking the three dots on the right of the overview tab.
If the project is empty, you can delete it by clicking the dropdown menu in the top-right and selecting Delete project.
Add notes to a project
Add notes to your project either as a description overview or as a markdown panel within your workspace.
Add description overview to a project
Descriptions you add to your page appear in the Overview tab of your profile.
Navigate to your W&B project
Select the Overview tab from the project sidebar
Choose Edit in the upper right hand corner
Add your notes in the Description field
Select the Save button
Create reports to create descriptive notes comparing runs
You can also create a W&B Report to add plots and markdown side by side. Use different sections to show different runs, and tell a story about what you worked on.
Add notes to run workspace
Navigate to your W&B project
Select the Workspace tab from the project sidebar
Choose the Add panels button from the top right corner
Select the TEXT AND CODE dropdown from the modal that appears
Select Markdown
Add your notes in the markdown panel that appears in your workspace
2.1.4 - View experiments results
A playground for exploring run data with interactive visualizations
W&B workspace is your personal sandbox to customize charts and explore model results. A W&B workspace consists of Tables and Panel sections:
Tables: All runs logged to your project are listed in the project’s table. Turn on and off runs, change colors, and expand the table to see notes, config, and summary metrics for each run.
Panel sections: A section that contains one or more panels. Create new panels, organize them, and export to reports to save snapshots of your workspace.
Workspace types
There are two main workspace categories: Personal workspaces and Saved views.
Personal workspaces: A customizable workspace for in-depth analysis of models and data visualizations. Only the owner of the workspace can edit and save changes. Teammates can view a personal workspace but teammates can not make changes to someone else’s personal workspace.
Saved views: Saved views are collaborative snapshots of a workspace. Anyone on your team can view, edit, and save changes to saved workspace views. Use saved workspace views for reviewing and discussing experiments, runs, and more.
The following image shows multiple personal workspaces created by Cécile-parker’s teammates. In this project, there are no saved views:
Saved workspace views
Improve team collaboration with tailored workspace views. Create Saved Views to organize your preferred setup of charts and data.
Create a new saved workspace view
Navigate to a personal workspace or a saved view.
Make edits to the workspace.
Click on the meatball menu (three horizontal dots) at the top right corner of your workspace. Click on Save as a new view.
New saved views appear in the workspace navigation menu.
Update a saved workspace view
Saved changes overwrite the previous state of the saved view. Unsaved changes are not retained. To update a saved workspace view in W&B:
Navigate to a saved view.
Make the desired changes to your charts and data within the workspace.
Click the Save button to confirm your changes.
A confirmation dialog appears when you save your updates to a workspace view. If you prefer not to see this prompt in the future, select the option Do not show this modal next time before confirming your save.
Delete a saved workspace view
Remove saved views that are no longer needed.
Navigate to the saved view you want to remove.
Select the three horizontal dots (…) at the top right of the view.
Choose Delete view.
Confirm the deletion to remove the view from your workspace menu.
Share a workspace view
Share your customized workspace with your team by sharing the workspace URL directly. All users with access to the workspace project can see the saved Views of that workspace.
Programmatically creating workspaces
wandb-workspaces is a Python library for programmatically working with W&B workspaces and reports.
Define a workspace programmatically with wandb-workspaces.
You can define the workspace’s properties, such as:
Set panel layouts, colors, and section orders.
Configure workspace settings like default x-axis, section order, and collapse states.
Add and customize panels within sections to organize workspace views.
Load and modify existing workspaces using a URL.
Save changes to existing workspaces or save as new views.
Filter, group, and sort runs programmatically using simple expressions.
Customize run appearance with settings like colors and visibility.
Copy views from one workspace to another for integration and reuse.
Install Workspace API
In addition to wandb, ensure that you install wandb-workspaces:
pip install wandb wandb-workspaces
Define and save a workspace view programmatically
import wandb_workspaces.workspaces as ws

workspace = ws.Workspace(entity="your-entity", project="your-project", views=[...])
workspace.save()
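A slightly fuller sketch, assuming the sections and panels API of recent wandb-workspaces releases (class and parameter names may differ by version):

import wandb_workspaces.reports.v2 as wr
import wandb_workspaces.workspaces as ws

workspace = ws.Workspace(
    name="Validation metrics view",  # display name for the saved view
    entity="your-entity",
    project="your-project",
    sections=[
        ws.Section(
            name="Validation",
            panels=[wr.LinePlot(x="Step", y=["val_loss"])],  # a single line plot panel
            is_open=True,
        ),
    ],
)
workspace.save()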
Learn about the basic building block of W&B, Runs.
A run is a single unit of computation logged by W&B. You can think of a W&B run as an atomic element of your whole project. In other words, each run is a record of a specific computation, such as training a model and logging the results, hyperparameter sweeps, and so forth.
Common patterns for initiating a run include, but are not limited to:
Training a model
Changing a hyperparameter and conducting a new experiment
Conducting a new machine learning experiment with a different model
W&B stores runs that you create into projects. You can view runs and their properties within the run’s project workspace on the W&B App UI. You can also programmatically access run properties with the wandb.Api.Run object.
Anything you log with run.log is recorded in that run. Consider the following code snippet.
import wandb
run = wandb.init(entity="nico", project="awesome-project")
run.log({"accuracy": 0.9, "loss": 0.1})
The first line imports the W&B Python SDK. The second line initializes a run in the project awesome-project under the entity nico. The third line logs the accuracy and loss of the model to that run.
Within the terminal, W&B returns:
wandb: Syncing run earnest-sunset-1
wandb: ⭐️ View project at https://wandb.ai/nico/awesome-project
wandb: 🚀 View run at https://wandb.ai/nico/awesome-project/runs/1jx1ud12
wandb:
wandb:
wandb: Run history:
wandb: accuracy ▁
wandb: loss ▁
wandb:
wandb: Run summary:
wandb: accuracy 0.9
wandb: loss 0.1
wandb:
wandb: 🚀 View run earnest-sunset-1 at: https://wandb.ai/nico/awesome-project/runs/1jx1ud12
wandb: ⭐️ View project at: https://wandb.ai/nico/awesome-project
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20241105_111006-1jx1ud12/logs
The URL W&B returns in the terminal redirects you to the run’s workspace in the W&B App UI. Note that the panels generated in the workspace correspond to the single logged point.
Logging a metric at a single point in time might not be that useful. A more realistic example, in the case of training discriminative models, is to log metrics at regular intervals. For example, consider the following code snippet:
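A minimal sketch of such a script (the simulated metric values are illustrative):

import random
import wandb

run = wandb.init(entity="<entity>", project="awesome-project")

# Simulate a training loop that logs accuracy and loss for 10 epochs
epochs = 10
offset = random.random() / 5
for epoch in range(1, epochs + 1):
    acc = 1 - 2**-epoch - random.random() / epoch - offset
    loss = 2**-epoch + random.random() / epoch + offset
    run.log({"accuracy": acc, "loss": loss})

run.finish()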
The training script calls run.log 10 times. Each time the script calls run.log, W&B logs the accuracy and loss for that epoch. Selecting the URL that W&B prints from the preceding output directs you to the run’s workspace in the W&B App UI.
Note that W&B captures the simulated training loop within a single run called jolly-haze-4. This is because the script calls the wandb.init method only once.
As another example, during a sweep, W&B explores a hyperparameter search space that you specify. W&B implements each new hyperparameter combination that the sweep creates as a unique run.
Initialize a run
Initialize a W&B run with wandb.init(). The following code snippet shows how to import the W&B Python SDK and initialize a run.
Ensure that you replace values enclosed in angle brackets (< >) with your own values:
import wandb
run = wandb.init(entity="<entity>", project="<project>")
When you initialize a run, W&B logs your run to the project you specify for the project field (wandb.init(project="<project>")). W&B creates a new project if the project does not already exist. If the project already exists, W&B stores the run in that project.
If you do not specify a project name, W&B stores the run in a project called Uncategorized.
For example, consider the following code snippet:
import wandb
run = wandb.init(entity="nico", project="awesome-project")
The code snippet produces the following output:
🚀 View run exalted-darkness-6 at:
https://wandb.ai/nico/awesome-project/runs/pgbn9y21
Find logs at: wandb/run-20241106_090747-pgbn9y21/logs
Since the preceding code did not specify an argument for the id parameter, W&B creates a unique run ID. Where nico is the entity that logged the run, awesome-project is the name of the project the run is logged to, exalted-darkness-6 is the name of the run, and pgbn9y21 is the run ID.
Notebook users
Specify run.finish() at the end of your run to mark the run finished. This helps ensure that the run is properly logged to your project and does not continue in the background.
import wandb
run = wandb.init(entity="<entity>", project="<project>")
# Training code, logging, and so forth
run.finish()
Each run has a state that describes the current status of the run. See Run states for a full list of possible run states.
Run states
The following table describes the possible states a run can be in:
| State | Description |
| --- | --- |
| Finished | Run ended and fully synced data, or called wandb.finish() |
| Failed | Run ended with a non-zero exit status |
| Crashed | Run stopped sending heartbeats in the internal process, which can happen if the machine crashes |
| Running | Run is still running and has recently sent a heartbeat |
If you do not specify a run ID when you initialize a run, W&B generates a random run ID for you. You can find the unique ID of a run in the W&B App UI.
Navigate to the W&B project you specified when you initialized the run.
Within your project’s workspace, select the Runs tab.
Select the Overview tab.
W&B displays the unique run ID in the Run path field. The run path consists of the name of your team, the name of the project, and the run ID. The unique ID is the last part of the run path.
For example, in the following image, the unique run ID is 9mxi1arc:
Custom run IDs
You can specify your own run ID by passing the id parameter to the wandb.init method.
import wandb
run = wandb.init(entity="<entity>", project="<project>", id="<run-id>")
You can use a run’s unique ID to directly navigate to the run’s overview page in the W&B App UI. The following cell shows the URL path for a specific run:
https://wandb.ai/<entity>/<project>/runs/<run-id>
Where values enclosed in angle brackets (< >) are placeholders for the actual values of the entity, project, and run ID.
Name your run
The name of a run is a human-readable, non-unique identifier.
By default, W&B generates a random run name when you initialize a new run. The name of a run appears within your project’s workspace and at the top of the run’s overview page.
Use run names as a way to quickly identify a run in your project workspace.
You can specify a name for your run by passing the name parameter to the wandb.init method.
import wandb
run = wandb.init(entity="<entity>", project="<project>", name="<run-name>")
Add a note to a run
Notes that you add to a specific run appear on the run page in the Overview tab and in the table of runs on the project page.
Navigate to your W&B project
Select the Workspace tab from the project sidebar
Select the run you want to add a note to from the run selector
Choose the Overview tab
Select the pencil icon next to the Description field and add your notes
Stop a run
Stop a run from the W&B App or programmatically.
Navigate to the terminal or code editor where you initialized the run.
Press Ctrl+D to stop the run.
For example, following the preceding instructions, your terminal might look similar to the following:
KeyboardInterrupt
wandb: 🚀 View run legendary-meadow-2 at: https://wandb.ai/nico/history-blaster-4/runs/o8sdbztv
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 1 other file(s)
wandb: Find logs at: ./wandb/run-20241106_095857-o8sdbztv/logs
Navigate to the W&B App UI to confirm the run is no longer active:
Navigate to the project that your run was logging to.
Select the name of the run.
You can find the name of the run that you stopped in the output of your terminal or code editor. For example, in the preceding example, the name of the run is legendary-meadow-2.
Choose the Overview tab from the project sidebar.
Next to the State field, the run’s state changes from running to Killed.
Navigate to the project that your run is logging to.
Select the run you want to stop within the run selector.
Choose the Overview tab from the project sidebar.
Select the top button next to the State field.
Next to the State field, the run’s state changes from running to Killed.
See State fields for a full list of possible run states.
View logged runs
View information about a specific run, such as the state of the run, artifacts logged to the run, log files recorded during the run, and more.
Navigate to the W&B project you specified when you initialized the run.
Within the project sidebar, select the Workspace tab.
Within the run selector, click the run you want to view, or enter a partial run name to filter for matching runs.
By default, long run names are truncated in the middle for readability. To truncate run names at the beginning or end instead, click the action ... menu at the top of the list of runs, then set Run name cropping to crop the end, middle, or beginning.
Note that the URL path of a specific run has the following format:
https://wandb.ai/<team-name>/<project-name>/runs/<run-id>
Where values enclosed in angle brackets (< >) are placeholders for the actual values of the team name, project name, and run ID.
Overview tab
Use the Overview tab to learn about specific run information in a project, such as:
Author: The W&B entity that creates the run.
Command: The command that initializes the run.
Description: A description of the run that you provided. This field is empty if you do not specify a description when you create the run. You can add a description to a run with the W&B App UI or programmatically with the Python SDK.
Duration: The amount of time the run is actively computing or logging data, excluding any pauses or waiting.
Git repository: The git repository associated with the run. You must enable git to view this field.
Host name: Where W&B computes the run. W&B displays the name of your machine if you initialize the run locally on your machine.
Name: The name of the run.
OS: Operating system that initializes the run.
Python executable: The command that starts the run.
Python version: Specifies the Python version that creates the run.
Run path: Identifies the unique run identifier in the form entity/project/run-ID.
Runtime: Measures the total time from the start to the end of the run. It’s the wall-clock time for the run. Runtime includes any time where the run is paused or waiting for resources, while duration does not.
Start time: The timestamp when you initialize the run.
The System tab shows system metrics tracked for a specific run such as CPU utilization, system memory, disk I/O, network traffic, GPU utilization and more.
For a full list of system metrics W&B tracks, see System metrics.
Delete one or more runs from a project with the W&B App.
Navigate to the project that contains the runs you want to delete.
Select the Runs tab from the project sidebar.
Select the checkbox next to the runs you want to delete.
Choose the Delete button (trash can icon) above the table.
From the modal that appears, choose Delete.
For projects that contain a large number of runs, you can use either the search bar to filter runs you want to delete using Regex or the filter button to filter runs based on their status, tags, or other properties.
2.1.5.1 - Add labels to runs with tags
Add tags to label runs with particular features that might not be obvious from the logged metrics or artifact data.
For example, you can add a tag to a run to indicate that the run’s model is in_production, that the run is preemptible, that the run represents the baseline, and so forth.
Add tags to one or more runs
Programmatically or interactively add tags to your runs.
Based on your use case, select the tab below that best fits your needs:
You can add tags to a run when it is created:
import wandb
run = wandb.init(
    entity="entity",
    project="<project-name>",
    tags=["tag1", "tag2"],
)
You can also update the tags after you initialize a run. For example, the following code snippet shows how to add a tag if a particular metric crosses a pre-defined threshold:
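A minimal sketch of that idea (the metric name, threshold value, and tag are placeholders):

import wandb

run = wandb.init(entity="<entity>", project="<project>", tags=["baseline"])

# ... training code that produces a loss value ...
current_loss = 0.15  # placeholder value from your training loop
threshold = 0.2      # placeholder threshold

# Add a tag to the active run if the metric crosses the threshold
if current_loss < threshold:
    run.tags = run.tags + ("low_loss",)

run.finish()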
After you create a run, you can update tags using the Public API. For example:
run = wandb.Api().run("{entity}/{project}/{run-id}")
run.tags.append("tag1")  # you can choose tags based on run data here
run.update()
This method is best suited to tagging large numbers of runs with the same tag or tags.
Navigate to your project workspace.
Select the Runs tab from the project sidebar.
Select one or more runs from the table.
Once you select one or more runs, select the Tag button above the table.
Type the tag you want to add and select the Create new tag checkbox to add the tag.
This method is best suited to applying a tag or tags to a single run manually.
Navigate to your project workspace.
Select a run from the list of runs within your project’s workspace.
Select Overview from the project sidebar.
Select the gray plus icon (+) button next to Tags.
Type a tag you want to add and select Add below the text box to add a new tag.
Remove tags from one or more runs
Tags can also be removed from runs with the W&B App UI.
This method is best suited to removing tags from a large number of runs.
In the Run sidebar of the project, select the table icon in the upper-right. This will expand the sidebar into the full runs table.
Hover over a run in the table to see a checkbox on the left or look in the header row for a checkbox to select all runs.
Select the checkbox to enable bulk actions.
Select the runs from which you want to remove tags.
Select the Tag button above the rows of runs.
Select the checkbox next to a tag to remove it from the run.
In the left sidebar of the Run page, select the top Overview tab. The tags on the run are visible here.
Hover over a tag and select the “x” to remove it from the run.
2.1.5.2 - Filter and search runs
How to use the sidebar and table on the project page
Use your project page to gain insights from runs logged to W&B.
Filter runs
Filter runs based on their status, tags, or other properties with the filter button.
Filter runs with tags
Filter runs based on their tags with the filter button.
Filter runs with regex
If regex doesn’t provide you the desired results, you can make use of tags to filter the runs in the Runs Table. Tags can be added either on run creation or after runs have finished. Once tags are added to a run, you can add a tag filter.
Search run names
Use regex to find runs that match the expression you specify. When you type a query in the search box, it filters the visible runs in the workspace graphs as well as the rows of the table.
Sort runs by minimum and maximum values
Sort the runs table by the minimum or maximum value of a logged metric. This is particularly useful if you want to view the best (or worst) recorded value.
The following steps describe how to sort the run table by a specific metric based on the minimum or maximum recorded value:
Hover your mouse over the column with the metric you want to sort with.
Select the kebab menu (three vertical dots).
From the dropdown, select either Show min or Show max.
From the same dropdown, select Sort by asc or Sort by desc to sort in ascending or descending order, respectively.
Search End Time for runs
We provide a column named End Time that logs the last heartbeat from the client process. The field is hidden by default.
Export runs table to CSV
Export the table of all your runs, hyperparameters, and summary metrics to a CSV with the download button.
2.1.5.3 - Fork a run
Forking a W&B run
The ability to fork a run is in private preview. Contact W&B Support at support@wandb.com to request access to this feature.
Use fork_from when you initialize a run with wandb.init() to “fork” from an existing W&B run. When you fork from a run, W&B creates a new run using the run ID and step of the source run.
Forking a run enables you to explore different parameters or models from a specific point in an experiment without impacting the original run.
Forking a run requires wandb SDK version >= 0.16.5
Forking a run requires monotonically increasing steps. You can not use non-monotonic steps defined with define_metric() to set a fork point because it would disrupt the essential chronological order of run history and system metrics.
Start a forked run
To fork a run, use the fork_from argument in wandb.init() and specify the source run ID and the step from the source run to fork from:
import wandb
# Initialize a run to be forked later
original_run = wandb.init(project="your_project_name", entity="your_entity_name")
# ... perform training or logging ...
original_run.finish()

# Fork the run from a specific step
forked_run = wandb.init(
    project="your_project_name",
    entity="your_entity_name",
    fork_from=f"{original_run.id}?_step=200",
)
Using an immutable run ID
Use an immutable run ID to ensure you have a consistent and unchanging reference to a specific run. Follow these steps to obtain the immutable run ID from the user interface:
Access the Overview Tab: Navigate to the Overview tab on the source run’s page.
Copy the Immutable Run ID: Click on the ... menu (three dots) located in the top-right corner of the Overview tab. Select the Copy Immutable Run ID option from the dropdown menu.
By following these steps, you will have a stable and unchanging reference to the run, which can be used for forking a run.
Continue from a forked run
After initializing a forked run, you can continue logging to the new run. You can log the same metrics for continuity and introduce new metrics.
For example, the following code example shows how to first fork a run and then how to log metrics to the forked run starting from a training step of 200:
import wandb
import math
# Initialize the first run and log some metrics
run1 = wandb.init(project="your_project_name", entity="your_entity_name")
for i in range(300):
    run1.log({"metric": i})
run1.finish()

# Fork from the first run at a specific step and log the metric starting from step 200
run2 = wandb.init(
    project="your_project_name", entity="your_entity_name", fork_from=f"{run1.id}?_step=200"
)

# Continue logging in the new run
# For the first few steps, log the metric as is from run1
# After step 250, start logging the spikey pattern
for i in range(200, 300):
    if i < 250:
        run2.log({"metric": i})  # Continue logging from run1 without spikes
    else:
        # Introduce the spikey behavior starting from step 250
        subtle_spike = i + (2 * math.sin(i / 3.0))  # Apply a subtle spikey pattern
        run2.log({"metric": subtle_spike})
    # Additionally log the new metric at all steps
    run2.log({"additional_metric": i * 1.1})
run2.finish()
Rewind and forking compatibility
Forking complements rewinding by providing more flexibility in managing and experimenting with your runs.
When you fork from a run, W&B creates a new branch off a run at a specific point to try different parameters or models.
When you rewind a run, W&B lets you correct or modify the run history itself.
2.1.5.4 - Group runs into experiments
Group training and evaluation runs into larger experiments
Group individual jobs into experiments by passing a unique group name to wandb.init().
Use cases
Distributed training: Use grouping if your experiments are split up into different pieces with separate training and evaluation scripts that should be viewed as parts of a larger whole.
Multiple processes: Group multiple smaller processes together into an experiment.
K-fold cross-validation: Group together runs with different random seeds to see a larger experiment. Here’s an example of k-fold cross-validation with sweeps and grouping.
There are three ways to set grouping:
1. Set group in your script
Pass an optional group and job_type to wandb.init(). This gives you a dedicated group page for each experiment, which contains the individual runs. For example: wandb.init(group="experiment_1", job_type="eval")
2. Set a group environment variable
Use WANDB_RUN_GROUP to specify a group for your runs as an environment variable. For more on this, check our docs for Environment Variables. Group should be unique within your project and shared by all runs in the group. You can use wandb.util.generate_id() to generate a unique 8-character string to use in all your processes. For example: os.environ["WANDB_RUN_GROUP"] = "experiment-" + wandb.util.generate_id()
3. Toggle grouping in the UI
You can dynamically group by any config column. For example, if you use wandb.config to log batch size or learning rate, you can then group by those hyperparameters dynamically in the web app.
Distributed training with grouping
If you set grouping in wandb.init(), W&B groups runs by default in the UI. You can toggle this on and off by clicking the Group button at the top of the table. Here’s an example project generated from sample code where we set grouping. You can click on each “Group” row in the sidebar to get to a dedicated group page for that experiment.
From the project page above, you can click a Group in the left sidebar to get to a dedicated page like this one:
Grouping dynamically in the UI
You can group runs by any column, for example by hyperparameter. Here’s an example of what that looks like:
Sidebar: Runs are grouped by the number of epochs.
Graphs: Each line represents the group’s mean, and the shading indicates the variance. This behavior can be changed in the graph settings.
Turn off grouping
Click the grouping button and clear group fields at any time, which returns the table and graphs to their ungrouped state.
Grouping graph settings
Click the edit button in the upper right corner of a graph and select the Advanced tab to change the line and shading. You can select the mean, minimum, or maximum value for the line in each group. For the shading, you can turn off shading, and show the min and max, the standard deviation, and the standard error.
2.1.5.5 - Move runs
Move runs between your projects or to a team you are a member of.
Move runs between your projects
To move runs from one project to another:
Navigate to the project that contains the runs you want to move.
Select the Runs tab from the project sidebar.
Select the checkbox next to the runs you want to move.
Choose the Move button above the table.
Select the destination project from the dropdown.
Move runs to a team
Move runs to a team you are a member of:
Navigate to the project that contains the runs you want to move.
Select the Runs tab from the project sidebar.
Select the checkbox next to the runs you want to move.
Choose the Move button above the table.
Select the destination team and project from the dropdown.
2.1.5.6 - Resume a run
Resume a paused or exited W&B Run
Specify how a run should behave in the event that the run stops or crashes. To resume a run or enable it to automatically resume, you need to specify the unique run ID associated with that run for the id parameter:
run = wandb.init(entity="<entity>", \
project="<project>", id="<run ID>", resume="<resume>")
W&B encourages you to provide the name of the W&B Project where you want to store the run.
Pass one of the following arguments to the resume parameter to determine how W&B should respond. In each case, W&B first checks if the run ID already exists.
| Argument | Description | Run ID exists | Run ID does not exist | Use case |
| --- | --- | --- | --- | --- |
| "must" | W&B must resume run specified by the run ID. | W&B resumes run with the same run ID. | W&B raises an error. | Resume a run that must use the same run ID. |
| "allow" | Allow W&B to resume run if run ID exists. | W&B resumes run with the same run ID. | W&B initializes a new run with specified run ID. | Resume a run without overriding an existing run. |
| "never" | Never allow W&B to resume a run specified by run ID. | W&B raises an error. | W&B initializes a new run with specified run ID. | |
You can also specify resume="auto" to let W&B automatically try to restart the run on your behalf. However, you need to ensure that you restart your run from the same directory. See the Enable runs to automatically resume section for more information.
For all the examples below, replace values enclosed within <> with your own.
Resume a run that must use the same run ID
If a run is stopped, crashes, or fails, you can resume it using the same run ID. To do so, initialize a run and specify the following:
Set the resume parameter to "must" (resume="must")
Provide the run ID of the run that stopped or crashed
The following code snippet shows how to accomplish this with the W&B Python SDK:
run = wandb.init(entity="<entity>", \
project="<project>", id="<run ID>", resume="must")
Unexpected results will occur if multiple processes use the same id concurrently.
Resume a run that stopped or crashed without overriding the existing run. This is especially helpful if your process doesn’t exit successfully. The next time you start W&B, W&B will start logging from the last step.
Set the resume parameter to "allow" (resume="allow") when you initialize a run with W&B. Provide the run ID of the run that stopped or crashed. The following code snippet shows how to accomplish this with the W&B Python SDK:
import wandb
run = wandb.init(entity="<entity>", \
project="<project>", id="<run ID>", resume="allow")
Enable runs to automatically resume
The following code snippet shows how to enable runs to automatically resume with the Python SDK or with environment variables.
The following code snippet shows how to specify a W&B run ID with the Python SDK.
Replace values enclosed within <> with your own:
run = wandb.init(entity="<entity>", \
project="<project>", id="<run ID>", resume="<resume>")
The following example shows how to specify the W&B WANDB_RUN_ID variable in a bash script:
Within your terminal, you could run the shell script along with the W&B run ID. The following code snippet passes the run ID akj172:
sh run_experiment.sh akj172
Automatic resuming only works if the process is restarted on top of the same filesystem as the failed process.
For example, suppose you execute a python script called train.py in a directory called Users/AwesomeEmployee/Desktop/ImageClassify/training/. Within train.py, the script creates a run that enables automatic resuming. Suppose next that the training script is stopped. To resume this run, you would need to restart your train.py script within Users/AwesomeEmployee/Desktop/ImageClassify/training/ .
If you can not share a filesystem, specify the WANDB_RUN_ID environment variable or pass the run ID with the W&B Python SDK. See the Custom run IDs section in the “What are runs?” page for more information on run IDs.
Resume preemptible Sweeps runs
Automatically requeue interrupted sweep runs. This is particularly useful if you run a sweep agent in a compute environment that is subject to preemption such as a SLURM job in a preemptible queue, an EC2 spot instance, or a Google Cloud preemptible VM.
Use the mark_preempting function to enable W&B to automatically requeue interrupted sweep runs. For example, the following code snippet marks a sweep run as preemptible:
run = wandb.init()  # Initialize a run
run.mark_preempting()
The following table outlines how W&B handles runs based on the exit status of a sweep run.
| Status | Behavior |
|---|---|
| Status code 0 | Run is considered to have terminated successfully and it will not be requeued. |
| Nonzero status | W&B automatically appends the run to a run queue associated with the sweep. |
| No status | Run is added to the sweep run queue. Sweep agents consume runs off the run queue until the queue is empty. Once the queue is empty, the sweep queue resumes generating new runs based on the sweep search algorithm. |
2.1.5.7 - Rewind a run
Rewind a run
The option to rewind a run is in private preview. Contact W&B Support at support@wandb.com to request access to this feature.
W&B currently does not support:
Log rewind: Logs are reset in the new run segment.
System metrics rewind: W&B logs only new system metrics after the rewind point.
Artifact association: W&B associates artifacts with the source run that produces them.
To rewind a run, you must have W&B Python SDK version >= 0.17.1.
You must use monotonically increasing steps. You cannot use non-monotonic steps defined with define_metric() because it disrupts the required chronological order of run history and system metrics.
Rewind a run to correct or modify the history of a run without losing the original data. In addition, when you rewind a run, you can log new data from that point in time. W&B recomputes the summary metrics for the run you rewind based on the newly logged history. This means the following behavior:
History truncation: W&B truncates the history to the rewind point, allowing new data logging.
Summary metrics: Recomputed based on the newly logged history.
Configuration preservation: W&B preserves the original configurations and you can merge new configurations.
When you rewind a run, W&B resets the state of the run to the specified step, preserving the original data and maintaining a consistent run ID. This means that:
Run archiving: W&B archives the original runs. Runs are accessible from the Run Overview tab.
Artifact association: Associates artifacts with the run that produces them.
Immutable run IDs: Introduced for consistent forking from a precise state.
Copy immutable run ID: A button to copy the immutable run ID for improved run management.
Rewind and forking compatibility
Forking complements rewinding.
When you fork from a run, W&B creates a new branch off a run at a specific point to try different parameters or models.
When you rewind a run, W&B lets you correct or modify the run history itself.
Rewind a run
Use resume_from with wandb.init() to “rewind” a run’s history to a specific step. Specify the name of the run and the step you want to rewind from:
import wandb
import math
# Initialize the first run and log some metrics
# Replace your_project_name and your_entity_name with your own!
run1 = wandb.init(project="your_project_name", entity="your_entity_name")
for i in range(300):
    run1.log({"metric": i})
run1.finish()

# Rewind from the first run at a specific step and log the metric starting from step 200
run2 = wandb.init(
    project="your_project_name",
    entity="your_entity_name",
    resume_from=f"{run1.id}?_step=200",
)

# Continue logging in the new run
# For the first few steps, log the metric as is from run1
# After step 250, start logging the spikey pattern
for i in range(200, 300):
    if i < 250:
        # Continue logging from run1 without spikes
        run2.log({"metric": i, "step": i})
    else:
        # Introduce the spikey behavior starting from step 250
        subtle_spike = i + (2 * math.sin(i / 3.0))  # Apply a subtle spikey pattern
        run2.log({"metric": subtle_spike, "step": i})
    # Additionally log the new metric at all steps
    run2.log({"additional_metric": i * 1.1, "step": i})
run2.finish()
View an archived run
After you rewind a run, you can explore the archived run with the W&B App UI. Follow these steps to view archived runs:
Access the Overview Tab: Navigate to the Overview tab on the run’s page. This tab provides a comprehensive view of the run’s details and history.
Locate the Forked From field: Within the Overview tab, find the Forked From field. This field captures the history of the resumptions. The Forked From field includes a link to the source run, allowing you to trace back to the original run and understand the entire rewind history.
By using the Forked From field, you can effortlessly navigate the tree of archived resumptions and gain insights into the sequence and origin of each rewind.
Fork from a run that you rewind
To fork from a rewound run, use the fork_from argument in wandb.init() and specify the source run ID and the step from the source run to fork from:
import wandb
# Fork the run from a specific step
forked_run = wandb.init(
    project="your_project_name",
    entity="your_entity_name",
    fork_from=f"{rewind_run.id}?_step=500",
)

# Continue logging in the new run
for i in range(500, 1000):
    forked_run.log({"metric": i * 3})
forked_run.finish()
2.1.5.8 - Send an alert
Send alerts, triggered from your Python code, to your Slack or email
Create alerts with Slack or email if your run crashes or with a custom trigger. For example, you can create an alert if the gradient of your training loop starts to blow up (reports NaN) or a step in your ML pipeline completes. Alerts apply to all projects where you initialize runs, including both personal and team projects.
And then see W&B Alerts messages in Slack (or your email):
How to create an alert
The following guide only applies to alerts in multi-tenant cloud.
If you’re using W&B Server in your Private Cloud or on W&B Dedicated Cloud, then please refer to this documentation to setup Slack alerts.
1. Turn on alerts in your W&B User Settings
Turn on Scriptable run alerts to receive alerts from run.alert().
Use Connect Slack to pick a Slack channel to post alerts. We recommend the Slackbot channel because it keeps the alerts private.
Email will go to the email address you used when you signed up for W&B. We recommend setting up a filter in your email so all these alerts go into a folder and don’t fill up your inbox.
You will only have to do this the first time you set up W&B Alerts, or when you’d like to modify how you receive alerts.
2. Add run.alert() to your code
Add run.alert() to your code (either in a notebook or Python script) wherever you'd like it to be triggered:
import wandb
run = wandb.init()
run.alert(title="High Loss", text="Loss is increasing rapidly")
3. Check your Slack or email
Check your Slack or emails for the alert message. If you didn’t receive any, make sure you’ve got emails or Slack turned on for Scriptable Alerts in your User Settings
Example
This simple alert sends a warning when accuracy falls below a threshold. In this example, it only sends alerts at least 5 minutes apart.
import wandb
from wandb import AlertLevel
run = wandb.init()
if acc < threshold:
    run.alert(
        title="Low accuracy",
        text=f"Accuracy {acc} is below the acceptable threshold {threshold}",
        level=AlertLevel.WARN,
        wait_duration=300,
    )
How to tag or mention users
Use the at sign @ followed by the Slack user ID to tag yourself or your colleagues in either the title or the text of the alert. You can find a Slack user ID from their Slack profile page.
run.alert(title="Loss is NaN", text=f"Hey <@U1234ABCD> loss has gone to NaN")
Team alerts
Team admins can set up alerts for the team on the team settings page: wandb.ai/teams/your-team.
Team alerts apply to everyone on your team. W&B recommends using the Slackbot channel because it keeps alerts private.
Change Slack channel to send alerts to
To change what channel alerts are sent to, click Disconnect Slack and then reconnect. After you reconnect, pick a different Slack channel.
2.1.6 - Log objects and media
Keep track of metrics, videos, custom plots, and more
Log a dictionary of metrics, media, or custom objects to a step with the W&B Python SDK. W&B collects the key-value pairs during each step and stores them in one unified dictionary each time you log data with wandb.log(). Data logged from your script is saved locally to your machine in a directory called wandb, then synced to the W&B cloud or your private server.
Key-value pairs are stored in one unified dictionary only if you pass the same value for each step. W&B writes all of the collected keys and values to memory if you log a different value for step.
Each call to wandb.log is a new step by default. W&B uses steps as the default x-axis when it creates charts and panels. You can optionally create and use a custom x-axis or capture a custom summary metric. For more information, see Customize log axes.
Use wandb.log() to log consecutive values for each step: 0, 1, 2, and so on. It is not possible to write to a specific history step. W&B only writes to the “current” and “next” step.
Automatically logged data
W&B automatically logs the following information during a W&B Experiment:
System metrics: CPU and GPU utilization, network, etc. These are shown in the System tab on the run page. For the GPU, these are fetched with nvidia-smi.
Command line: The stdout and stderr are picked up and shown in the Logs tab on the run page.
Git commit: W&B picks up the latest git commit and shows it on the Overview tab of the run page, along with a diff.patch file if there are any uncommitted changes.
Dependencies: The requirements.txt file will be uploaded and shown on the files tab of the run page, along with any files you save to the wandb directory for the run.
What data is logged with specific W&B API calls?
With W&B, you can decide exactly what you want to log. The following lists some commonly logged objects:
Datasets: You have to specifically log images or other dataset samples for them to stream to W&B.
Plots: Use wandb.plot with wandb.log to track charts. See Log Plots for more information.
Tables: Use wandb.Table to log data to visualize and query with W&B. See Log Tables for more information.
PyTorch gradients: Add wandb.watch(model) to see gradients of the weights as histograms in the UI.
Configuration information: Log hyperparameters, a link to your dataset, or the name of the architecture you’re using as config parameters, passed in like this: wandb.init(config=your_config_dictionary). See the PyTorch Integrations page for more information.
Metrics: Use wandb.log to see metrics from your model. If you log metrics like accuracy and loss from inside your training loop, you’ll get live updating graphs in the UI.
Common workflows
Compare the best accuracy: To compare the best value of a metric across runs, set the summary value for that metric. By default, summary is set to the last value you logged for each key. This is useful in the table in the UI, where you can sort and filter runs based on their summary metrics, to help compare runs in a table or bar chart based on their best accuracy, instead of final accuracy. For example: wandb.run.summary["best_accuracy"] = best_accuracy
Multiple metrics on one chart: Log multiple metrics in the same call to wandb.log, like this: wandb.log({"acc": 0.9, "loss": 0.1}), and they will both be available to plot against in the UI.
Custom x-axis: Add a custom x-axis to the same log call to visualize your metrics against a different axis in the W&B dashboard. For example: wandb.log({'acc': 0.9, 'epoch': 3, 'batch': 117}). To set the default x-axis for a given metric, use Run.define_metric().
2.1.6.1 - Log plots
Create and track plots from machine learning experiments.
Using the methods in wandb.plot, you can track charts with wandb.log, including charts that change over time during training. To learn more about our custom charting framework, check out this guide.
Basic charts
These simple charts make it easy to construct basic visualizations of metrics and results.
wandb.plot.line()
Log a custom line plot—a list of connected and ordered points on arbitrary axes.
data = [[x, y] for (x, y) in zip(x_values, y_values)]
table = wandb.Table(data=data, columns=["x", "y"])
wandb.log(
{
"my_custom_plot_id": wandb.plot.line(
table, "x", "y", title="Custom Y vs X Line Plot" )
}
)
You can use this to log curves on any two dimensions. If you’re plotting two lists of values against each other, the number of values in the lists must match exactly. For example, each point must have an x and a y.
wandb.plot.scatter()
Log a custom scatter plot—a list of points (x, y) on a pair of arbitrary axes x and y.
data = [[x, y] for (x, y) in zip(class_x_scores, class_y_scores)]
table = wandb.Table(data=data, columns=["class_x", "class_y"])
wandb.log({"my_custom_id": wandb.plot.scatter(table, "class_x", "class_y")})
You can use this to log scatter points on any two dimensions. If you’re plotting two lists of values against each other, the number of values in the lists must match exactly. For example, each point must have an x and a y.
wandb.plot.histogram()
Log a custom histogram—sort a list of values into bins by count/frequency of occurrence—natively in a few lines. Let's say I have a list of prediction confidence scores (scores) and want to visualize their distribution:
data = [[s] for s in scores]
table = wandb.Table(data=data, columns=["scores"])
wandb.log({"my_histogram": wandb.plot.histogram(table, "scores", title="Histogram")})
You can use this to log arbitrary histograms. Note that data is a list of lists, intended to support a 2D array of rows and columns.
Note that the number of x and y points must match exactly. You can supply one list of x values to match multiple lists of y values, or a separate list of x values for each list of y values.
These preset charts have built-in wandb.plot methods that make it quick and easy to log charts directly from your script and see the exact information you’re looking for in the UI.
cm = wandb.plot.confusion_matrix(
y_true=ground_truth, preds=predictions, class_names=class_names
)
wandb.log({"conf_mat": cm})
You can log this wherever your code has access to:
a model’s predicted labels on a set of examples (preds) or the normalized probability scores (probs). The probabilities must have the shape (number of examples, number of classes). You can supply either probabilities or predictions but not both.
the corresponding ground truth labels for those examples (y_true)
a full list of the labels/class names as strings for class_names. Examples: class_names=["cat", "dog", "bird"] if index 0 is cat, 1 is dog, and 2 is bird.
For full customization, tweak a built-in Custom Chart preset or create a new preset, then save the chart. Use the chart ID to log data to that custom preset directly from your script.
# Create a table with the columns to plot
table = wandb.Table(data=data, columns=["step", "height"])

# Map from the table's columns to the chart's fields
fields = {"x": "step", "value": "height"}

# Use the table to populate the new custom chart preset
# To use your own saved chart preset, change the vega_spec_name
# To edit the title, change the string_fields
my_custom_chart = wandb.plot_table(
    vega_spec_name="carey/new_chart",
    data_table=table,
    fields=fields,
    string_fields={"title": "Height Histogram"},
)
Just pass a matplotlib plot or figure object to wandb.log(). By default we’ll convert the plot into a Plotly plot. If you’d rather log the plot as an image, you can pass the plot into wandb.Image. We also accept Plotly charts directly.
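For example, a minimal sketch of this workflow (the project name here is a hypothetical placeholder):
import matplotlib.pyplot as plt
import wandb

wandb.init(project="matplotlib-example")

# Build a simple matplotlib figure
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 9, 16])
ax.set_xlabel("x")
ax.set_ylabel("y")

# Logging the figure converts it to an interactive Plotly chart by default.
# To log it as a static image instead, wrap it: wandb.log({"my_chart": wandb.Image(fig)})
wandb.log({"my_chart": fig})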
If you’re getting an error “You attempted to log an empty plot” then you can store the figure separately from the plot with fig = plt.figure() and then log fig in your call to wandb.log.
Log custom HTML to W&B Tables
W&B supports logging interactive charts from Plotly and Bokeh as HTML and adding them to Tables.
Log Plotly figures to Tables as HTML
You can log interactive Plotly charts to wandb Tables by converting them to HTML.
import wandb
import plotly.express as px
# Initialize a new run
run = wandb.init(project="log-plotly-fig-tables", name="plotly_html")

# Create a table
table = wandb.Table(columns=["plotly_figure"])

# Create path for Plotly figure
path_to_plotly_html = "./plotly_figure.html"

# Example Plotly figure
fig = px.scatter(x=[0, 1, 2, 3, 4], y=[0, 1, 4, 9, 16])

# Write Plotly figure to HTML
# Setting auto_play to False prevents animated Plotly charts
# from playing in the table automatically
fig.write_html(path_to_plotly_html, auto_play=False)

# Add Plotly figure as HTML file into Table
table.add_data(wandb.Html(path_to_plotly_html))

# Log Table
run.log({"test_table": table})
wandb.finish()
Log Bokeh figures to Tables as HTML
You can log interactive Bokeh charts to wandb Tables by converting them to HTML.
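Here is a minimal sketch of that workflow; the project name, figure, and file path are illustrative assumptions:
from bokeh.embed import file_html
from bokeh.plotting import figure
from bokeh.resources import CDN

import wandb

# Initialize a new run (hypothetical project name)
run = wandb.init(project="log-bokeh-fig-tables", name="bokeh_html")

# Create an example Bokeh figure
p = figure(title="Example Bokeh plot", x_axis_label="x", y_axis_label="y")
p.line([1, 2, 3, 4], [1, 4, 9, 16])

# Convert the figure to standalone HTML and write it to disk
path_to_bokeh_html = "./bokeh_figure.html"
with open(path_to_bokeh_html, "w") as f:
    f.write(file_html(p, CDN, "Example Bokeh plot"))

# Add the Bokeh figure as an HTML file to a Table and log it
table = wandb.Table(columns=["bokeh_figure"])
table.add_data(wandb.Html(path_to_bokeh_html))
run.log({"bokeh_table": table})
run.finish()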
2.1.6.2 - Customize log axes
Use define_metric to set a custom x-axis. Custom x-axes are useful in contexts where you need to log to different time steps in the past during training, asynchronously. For example, this can be useful in RL where you may track the per-episode reward and a per-step reward.
By default, all metrics are logged against the same x-axis, which is the W&B internal step. Sometimes, you might want to log to a previous step, or use a different x-axis.
Here’s an example of setting a custom x-axis metric, instead of the default step.
import wandb
wandb.init()
# Define our custom x axis metric
wandb.define_metric("custom_step")
# Define which metrics will be plotted against it
wandb.define_metric("validation_loss", step_metric="custom_step")

for i in range(10):
    log_dict = {
        "train_loss": 1 / (i + 1),
        "custom_step": i**2,
        "validation_loss": 1 / (i + 1),
    }
    wandb.log(log_dict)
The x-axis can be set using globs as well. Currently, only globs that have string prefixes are available. The following example will plot all logged metrics with the prefix "train/" to the x-axis "train/step":
import wandb
wandb.init()
# Define our custom x axis metric
wandb.define_metric("train/step")
# Set all other train/ metrics to use this step
wandb.define_metric("train/*", step_metric="train/step")

for i in range(10):
    log_dict = {
        "train/step": 2**i,  # exponential growth w/ internal W&B step
        "train/loss": 1 / (i + 1),  # x-axis is train/step
        "train/accuracy": 1 - (1 / (1 + i)),  # x-axis is train/step
        "val/loss": 1 / (1 + i),  # x-axis is internal wandb step
    }
    wandb.log(log_dict)
2.1.6.3 - Log distributed training experiments
Use W&B to log distributed training experiments with multiple GPUs.
In distributed training, models are trained using multiple GPUs in parallel. W&B supports two patterns to track distributed training experiments:
One process: Initialize W&B (wandb.init) and log experiments (wandb.log) from a single process. This is a common solution for logging distributed training experiments with the PyTorch Distributed Data Parallel (DDP) Class. In some cases, users funnel data over from other processes using a multiprocessing queue (or another communication primitive) to the main logging process.
Many processes: Initialize W&B (wandb.init) and log experiments (wandb.log) in every process. Each process is effectively a separate experiment. Use the group parameter when you initialize W&B (wandb.init(group='group-name')) to define a shared experiment and group the logged values together in the W&B App UI.
The following examples demonstrate how to track metrics with W&B using PyTorch DDP on two GPUs on a single machine. PyTorch DDP (DistributedDataParallel in torch.nn) is a popular library for distributed training. The basic principles apply to any distributed training setup, but the details of implementation may differ.
Explore the code behind these examples in the W&B GitHub examples repository here. Specifically, see the log-ddp.py Python script for information on how to implement one-process and many-process methods.
Method 1: One process
In this method we track only the rank 0 process. To implement this method, initialize W&B (wandb.init), commence a W&B Run, and log metrics (wandb.log) within the rank 0 process. This method is simple and robust; however, it does not log model metrics from other processes (for example, loss values or inputs from their batches). System metrics, such as usage and memory, are still logged for all GPUs since that information is available to all processes.
Use this method to only track metrics available from a single process. Typical examples include GPU/CPU utilization, behavior on a shared validation set, gradients and parameters, and loss values on representative data examples.
Within our sample Python script (log-ddp.py), we check to see if the rank is 0. To do so, we first launch multiple processes with torch.distributed.launch. Next, we check the rank with the --local_rank command line argument. If the rank is set to 0, we set up wandb logging conditionally in the train() function. Within our Python script, we use the following check:
if __name__ == "__main__":
    # Get args
    args = parse_args()

    if args.local_rank == 0:  # only on main process
        # Initialize wandb run
        run = wandb.init(
            entity=args.entity,
            project=args.project,
        )
        # Train model with DDP
        train(args, run)
    else:
        train(args)
Explore the W&B App UI to view an example dashboard of metrics tracked from a single process. The dashboard displays system metrics such as temperature and utilization, that were tracked for both GPUs.
However, the loss values as a function of epoch and batch size were only logged from a single GPU.
Method 2: Many processes
In this method, we track each process in the job, calling wandb.init() and wandb.log() from each process separately. We suggest you call wandb.finish() at the end of training, to mark that the run has completed so that all processes exit properly.
This method makes more information accessible for logging. However, note that multiple W&B Runs are reported in the W&B App UI. It might be difficult to keep track of W&B Runs across multiple experiments. To mitigate this, provide a value to the group parameter when you initialize W&B to keep track of which W&B Run belongs to a given experiment. For more information about how to keep track of training and evaluation W&B Runs in experiments, see Group Runs.
Use this method if you want to track metrics from individual processes. Typical examples include the data and predictions on each node (for debugging data distribution) and metrics on individual batches outside of the main node. This method is not necessary to get system metrics from all nodes nor to get summary statistics available on the main node.
The following Python code snippet demonstrates how to set the group parameter when you initialize W&B:
if __name__ == "__main__":
    # Get args
    args = parse_args()

    # Initialize run
    run = wandb.init(
        entity=args.entity,
        project=args.project,
        group="DDP",  # all runs for the experiment in one group
    )

    # Train model with DDP
    train(args, run)
Explore the W&B App UI to view an example dashboard of metrics tracked from multiple processes. Note that there are two W&B Runs grouped together in the left sidebar. Click on a group to view the dedicated group page for the experiment. The dedicated group page displays metrics from each process separately.
The preceding image demonstrates the W&B App UI dashboard. In the sidebar we see two experiments: one labeled 'null' and a second (bound by a yellow box) called 'DDP'. If you expand the group (select the Group dropdown), you will see the W&B Runs that are associated with that experiment.
Use W&B Service to avoid common distributed training issues
There are two common issues you might encounter when using W&B and distributed training:
Hanging at the beginning of training - A wandb process can hang if the wandb multiprocessing interferes with the multiprocessing from distributed training.
Hanging at the end of training - A training job might hang if the wandb process does not know when it needs to exit. Call the wandb.finish() API at the end of your Python script to tell W&B that the Run finished. The wandb.finish() API will finish uploading data and will cause W&B to exit.
We recommend using the wandb service to improve the reliability of your distributed jobs. Both of the preceding training issues are commonly found in versions of the W&B SDK where wandb service is unavailable.
Enable W&B Service
Depending on your version of the W&B SDK, you might already have W&B Service enabled by default.
W&B SDK 0.13.0 and above
W&B Service is enabled by default for versions of the W&B SDK 0.13.0 and above.
W&B SDK 0.12.5 and above
Modify your Python script to enable W&B Service for W&B SDK version 0.12.5 and above. Use the wandb.require method and pass the string "service" within your main function:
def main():
    wandb.require("service")
    # rest-of-your-script-goes-here


if __name__ == "__main__":
    main()
For the best experience, we recommend that you upgrade to the latest version.
W&B SDK 0.12.4 and below
If you use W&B SDK version 0.12.4 or below, set the WANDB_START_METHOD environment variable to "thread" to use multithreading instead.
Example use cases for multiprocessing
The following code snippets demonstrate common methods for advanced distributed use cases.
Spawn process
Use the wandb.setup() method in your main function if you initiate a W&B Run in a spawned process:
import multiprocessing as mp

import wandb


def do_work(n):
    run = wandb.init(config=dict(n=n))
    run.log(dict(this=n * n))


def main():
    wandb.setup()
    pool = mp.Pool(processes=4)
    pool.map(do_work, range(4))


if __name__ == "__main__":
    main()
Share a W&B Run
Pass a W&B Run object as an argument to share W&B Runs between processes:
defdo_work(run):
run.log(dict(this=1))
defmain():
run = wandb.init()
p = mp.Process(target=do_work, kwargs=dict(run=run))
p.start()
p.join()
if __name__ =="__main__":
main()
Note that we cannot guarantee the logging order. Synchronization should be done by the author of the script.
2.1.6.4 - Log media and objects
Log rich media, from 3D point clouds and molecules to HTML and histograms
We support images, video, audio, and more. Log rich media to explore your results and visually compare your runs, models, and datasets. Read on for examples and how-to guides.
Looking for reference docs for our media types? You want this page.
In order to log media objects with the W&B SDK, you may need to install additional dependencies.
You can install these dependencies by running the following command:
pip install wandb[media]
Images
Log images to track inputs, outputs, filter weights, activations, and more.
Images can be logged directly from NumPy arrays, as PIL images, or from the filesystem.
Each time you log images from a step, we save them to show in the UI. Expand the image panel, and use the step slider to look at images from different steps. This makes it easy to compare how a model’s output changes during training.
It’s recommended to log fewer than 50 images per step to prevent logging from becoming a bottleneck during training and image loading from becoming a bottleneck when viewing results.
We assume the image is gray scale if the last dimension is 1, RGB if it’s 3, and RGBA if it’s 4. If the array contains floats, we convert them to integers between 0 and 255. If you want to normalize your images differently, you can specify the mode manually or just supply a PIL.Image, as described in the “Logging PIL Images” tab of this panel.
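For example, a minimal sketch of logging NumPy arrays directly; the random images and project name here are illustrative stand-ins for your own data:
import numpy as np
import wandb

wandb.init(project="image-logging-example")  # hypothetical project name

# Four random 32x32 RGB images as a stand-in for your own image batch
image_array = np.random.randint(0, 255, size=(4, 32, 32, 3), dtype=np.uint8)

images = [wandb.Image(img, caption=f"example {i}") for i, img in enumerate(image_array)]
wandb.log({"examples": images})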
For full control over the conversion of arrays to images, construct the PIL.Image yourself and provide it directly.
images = [PIL.Image.fromarray(image) for image in image_array]
wandb.log({"examples": [wandb.Image(image) for image in images]})
For even more control, create images however you like, save them to disk, and provide a filepath.
im = PIL.Image.fromarray(...)
rgb_im = im.convert("RGB")
rgb_im.save("myimage.jpg")
wandb.log({"example": wandb.Image("myimage.jpg")})
Image overlays
Log semantic segmentation masks and interact with them (altering opacity, viewing changes over time, and more) via the W&B UI.
To log an overlay, you’ll need to provide a dictionary with the following keys and values to the masks keyword argument of wandb.Image:
one of two keys representing the image mask:
"mask_data": a 2D NumPy array containing an integer class label for each pixel
"path": (string) a path to a saved image mask file
"class_labels": (optional) a dictionary mapping the integer class labels in the image mask to their readable class names
To log multiple masks, log a mask dictionary with multiple keys, as in the code snippet below.
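The snippet below is a minimal sketch of logging two mask groups; the image, mask arrays, and class labels are illustrative assumptions:
import numpy as np
import wandb

wandb.init(project="segmentation-example")  # hypothetical project name

# A toy image and an integer class label for each pixel (values are illustrative)
image = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)
predicted_mask = np.random.randint(0, 3, size=(64, 64))
ground_truth_mask = np.random.randint(0, 3, size=(64, 64))

class_labels = {0: "background", 1: "car", 2: "road"}

mask_img = wandb.Image(
    image,
    masks={
        # Log each meaningful group of masks with a unique key name
        "predictions": {"mask_data": predicted_mask, "class_labels": class_labels},
        "ground_truth": {"mask_data": ground_truth_mask, "class_labels": class_labels},
    },
)
wandb.log({"segmentation_example": mask_img})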
To log a bounding box, you’ll need to provide a dictionary with the following keys and values to the boxes keyword argument of wandb.Image:
box_data: a list of dictionaries, one for each box. The box dictionary format is described below.
position: a dictionary representing the position and size of the box in one of two formats, as described below. Boxes need not all use the same format.
Option 1: {"minX", "maxX", "minY", "maxY"}. Provide a set of coordinates defining the upper and lower bounds of each box dimension.
Option 2: {"middle", "width", "height"}. Provide a set of coordinates specifying the middle coordinates as [x, y], and width and height as scalars.
class_id: an integer representing the class identity of the box. See class_labels key below.
scores: a dictionary of string labels and numeric values for scores. Can be used for filtering boxes in the UI.
domain: specify the units/format of the box coordinates. Set this to “pixel” if the box coordinates are expressed in pixel space, such as integers within the bounds of the image dimensions. By default, the domain is assumed to be a fraction/percentage of the image, expressed as a floating point number between 0 and 1.
box_caption: (optional) a string to be displayed as the label text on this box
class_labels: (optional) A dictionary mapping class_ids to strings. By default we will generate class labels class_0, class_1, etc.
Check out this example:
class_id_to_label = {
    1: "car",
    2: "road",
    3: "building",
    # ...
}

img = wandb.Image(
    image,
    boxes={
        "predictions": {
            "box_data": [
                {
                    # one box expressed in the default relative/fractional domain
                    "position": {"minX": 0.1, "maxX": 0.2, "minY": 0.3, "maxY": 0.4},
                    "class_id": 2,
                    "box_caption": class_id_to_label[2],
                    "scores": {"acc": 0.1, "loss": 1.2},
                },
                {
                    # another box expressed in the pixel domain
                    # (for illustration purposes only, all boxes are likely
                    # to be in the same domain/format)
                    "position": {"middle": [150, 20], "width": 68, "height": 112},
                    "domain": "pixel",
                    "class_id": 3,
                    "box_caption": "a building",
                    "scores": {"acc": 0.5, "loss": 0.7},
                },
                # ...
                # Log as many boxes as needed
            ],
            "class_labels": class_id_to_label,
        },
        # Log each meaningful group of boxes with a unique key name
        "ground_truth": {
            # ...
        },
    },
)

wandb.log({"driving_scene": img})
Image overlays in Tables
To log Segmentation Masks in tables, you will need to provide a wandb.Image object for each row in the table.
If a sequence of numbers, such as a list, array, or tensor, is provided as the first argument, we will construct the histogram automatically by calling np.histogram. All arrays/tensors are flattened. You can use the optional num_bins keyword argument to override the default of 64 bins. The maximum number of bins supported is 512.
In the UI, histograms are plotted with the training step on the x-axis, the metric value on the y-axis, and the count represented by color, to ease comparison of histograms logged throughout training. See the “Histograms in Summary” tab of this panel for details on logging one-off histograms.
wandb.log({"gradients": wandb.Histogram(grads)})
If you want more control, call np.histogram and pass the returned tuple to the np_histogram keyword argument.
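For instance, a sketch of that more controlled path; the random grads array stands in for your own values:
import numpy as np
import wandb

grads = np.random.normal(0, 1, size=1000)  # stand-in for your gradient values

# Compute the histogram yourself, then hand the (counts, bin_edges) tuple to W&B
np_hist = np.histogram(grads, bins=64)
wandb.log({"gradients": wandb.Histogram(np_histogram=np_hist)})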
If histograms are in your summary they will appear on the Overview tab of the Run Page. If they are in your history, we plot a heatmap of bins over time on the Charts tab.
3D visualizations
Log 3D point clouds and Lidar scenes with bounding boxes. Pass in a NumPy array containing coordinates and colors for the points to render.
boxes is a NumPy array of python dictionaries with the following attributes:
corners - a list of eight corners
label - a string representing the label to be rendered on the box (Optional)
color - RGB values representing the color of the box
score - a numeric value that will be displayed on the bounding box, which can be used to filter the bounding boxes shown (for example, to only show bounding boxes where score > 0.75). (Optional)
type is a string representing the scene type to render. Currently, the only supported value is lidar/beta.
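The following is a minimal sketch of logging a point cloud with one bounding box via wandb.Object3D; the point coordinates, box corners, labels, and project name are illustrative:
import numpy as np
import wandb

wandb.init(project="point-cloud-example")  # hypothetical project name

point_scene = wandb.Object3D(
    {
        "type": "lidar/beta",
        # Points as [x, y, z, category] rows
        "points": np.array(
            [
                [0.4, 1.0, 1.3, 1],
                [1.0, 1.0, 1.0, 2],
                [1.2, 1.0, 1.2, 3],
            ]
        ),
        # One labeled box defined by its eight corners
        "boxes": np.array(
            [
                {
                    "corners": [
                        [0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0],
                        [1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1],
                    ],
                    "label": "Box-1",
                    "color": [123, 231, 111],
                }
            ]
        ),
    }
)
wandb.log({"point_scene": point_scene})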
Now you can view videos in the media browser. Go to your project workspace, run workspace, or report and click Add visualization to add a rich media panel.
2D view of a molecule
You can log a 2D view of a molecule using the wandb.Image data type and rdkit:
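A minimal sketch of that approach, rendering a molecule from a SMILES string with rdkit and logging the resulting PIL image; the SMILES string, key, and project name are illustrative:
import wandb
from rdkit import Chem
from rdkit.Chem import Draw

wandb.init(project="molecule-example")  # hypothetical project name

molecule = Chem.MolFromSmiles("CC(=O)O")  # acetic acid, as an example
pil_image = Draw.MolToImage(molecule, size=(300, 300))

wandb.log({"acetic_acid": wandb.Image(pil_image)})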
If a numpy array is supplied we assume the dimensions are, in order: time, channels, width, height. By default we create a 4 fps gif image (ffmpeg and the moviepy python library are required when passing numpy objects). Supported formats are "gif", "mp4", "webm", and "ogg". If you pass a string to wandb.Video we assert the file exists and is a supported format before uploading to wandb. Passing a BytesIO object will create a temporary file with the specified format as the extension.
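For example, a minimal sketch of logging a NumPy video; the array shape, key, and project name are illustrative:
import numpy as np
import wandb

wandb.init(project="video-example")  # hypothetical project name

# 60 frames, 3 channels, 64x64 pixels: (time, channels, width, height)
frames = np.random.randint(0, 255, size=(60, 3, 64, 64), dtype=np.uint8)
wandb.log({"video": wandb.Video(frames, fps=4, format="gif")})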
On the W&B Run and Project Pages, you will see your videos in the Media section.
Use wandb.Table to log text in tables to show up in the UI. By default, the column headers are ["Input", "Output", "Expected"]. To ensure optimal UI performance, the default maximum number of rows is set to 10,000. However, users can explicitly override the maximum with wandb.Table.MAX_ROWS = {DESIRED_MAX}.
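For example, a short sketch of logging a few text rows; the column names match the defaults mentioned above, while the contents and project name are illustrative:
import wandb

wandb.init(project="text-table-example")  # hypothetical project name

columns = ["Input", "Output", "Expected"]
text_table = wandb.Table(columns=columns)
text_table.add_data("The quick brown", "fox", "fox")
text_table.add_data("jumps over the", "lazy dog", "lazy dog")
wandb.log({"examples": text_table})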
Custom html can be logged at any key, and this exposes an HTML panel on the run page. By default we inject default styles, you can turn off default styles by passing inject=False.
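A minimal sketch; the HTML string, file name, and project name are illustrative assumptions:
import wandb

wandb.init(project="html-example")  # hypothetical project name

# Log a raw HTML string; W&B injects default styles unless inject=False
wandb.log({"custom_string": wandb.Html('<a href="https://mysite">Link</a>')})

# Log an existing HTML file without the default styles
wandb.log({"custom_file": wandb.Html(open("some.html"), inject=False)})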
2.1.6.5 - Log models
The following guide describes how to log models to a W&B run and interact with them.
The following APIs are useful for tracking models as a part of your experiment tracking workflow. Use the APIs listed on this page to log models to a run, and to access metrics, tables, media, and other objects.
W&B suggests that you use W&B Artifacts if you want to:
Create and keep track of different versions of serialized data besides models, such as datasets, prompts, and more.
Explore lineage graphs of a model or any other objects tracked in W&B.
Interact with the model artifacts these methods created, such as updating properties (metadata, aliases, and descriptions)
For more information on W&B Artifacts and advanced versioning use cases, see the Artifacts documentation.
Log a model to a run
Use the log_model method to log a model artifact that contains content within a directory you specify. The log_model method also marks the resulting model artifact as an output of the W&B run.
You can track a model’s dependencies and the model’s associations if you mark the model as the input or output of a W&B run. View the lineage of the model within the W&B App UI. See the Explore and traverse artifact graphs page within the Artifacts chapter for more information.
Provide the path where your model files are saved to the path parameter. The path can be a local file, directory, or reference URI to an external bucket such as s3://bucket/path.
Be sure to replace values enclosed in <> with your own.
import wandb
# Initialize a W&B run
run = wandb.init(project="<your-project>", entity="<your-entity>")

# Log the model
run.log_model(path="<path-to-model>", name="<name>")
Optionally provide a name for the model artifact for the name parameter. If name is not specified, W&B will use the basename of the input path prepended with the run ID as the name.
Keep track of the name that you, or W&B, assigns to the model. You will need the name of the model to retrieve the model path with the use_model method.
See log_model in the API Reference guide for more information on possible parameters.
Example: Log a model to a run
import os
import wandb
from tensorflow import keras
from tensorflow.keras import layers
config = {"optimizer": "adam", "loss": "categorical_crossentropy"}
# Initialize a W&B run
run = wandb.init(entity="charlie", project="mnist-experiments", config=config)

# Hyperparameters
loss = run.config["loss"]
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
num_classes = 10
input_shape = (28, 28, 1)

# Training algorithm
model = keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
# Configure the model for training
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)

# Save model
model_filename = "model.h5"
local_filepath = "./"
full_path = os.path.join(local_filepath, model_filename)
model.save(filepath=full_path)

# Log the model to the W&B run
run.log_model(path=full_path, name="MNIST")
run.finish()
When the user called log_model, a model artifact named MNIST was created and the file model.h5 was added to it. Your terminal or notebook prints information about where to find the run the model was logged to.
View run different-surf-5 at: https://wandb.ai/charlie/mnist-experiments/runs/wlby6fuw
Synced 5 W&B file(s), 0 media file(s), 1 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20231206_103511-wlby6fuw/logs
Download and use a logged model
Use the use_model function to access and download model files previously logged to a W&B run.
Provide the name of the model artifact where the model files you want to retrieve are stored. The name you provide must match the name of an existing logged model artifact.
If you did not define name when you originally logged the files with log_model, the default name assigned is the basename of the input path, prepended with the run ID.
Be sure to replace the values enclosed in <> with your own:
import wandb
# Initialize a run
run = wandb.init(project="<your-project>", entity="<your-entity>")

# Access and download the model. Returns the path to the downloaded artifact
downloaded_model_path = run.use_model(name="<your-model-name>")
The use_model function returns the path of downloaded model files. Keep track of this path if you want to link this model later. In the preceding code snippet, the returned path is stored in a variable called downloaded_model_path.
Example: Download and use a logged model
For example, in the following code snippet a user called the use_model API. They specified the name of the model artifact they want to fetch and also provided a version/alias. They then stored the path returned from the API in the downloaded_model_path variable.
import wandb
entity = "luka"
project = "NLP_Experiments"
alias = "latest"  # semantic nickname or identifier for the model version
model_artifact_name = "fine-tuned-model"

# Initialize a run
run = wandb.init(project=project, entity=entity)

# Access and download the model. Returns the path to the downloaded artifact
downloaded_model_path = run.use_model(name=f"{model_artifact_name}:{alias}")
See use_model in the API Reference guide for more information on possible parameters and return type.
Log and link a model to the W&B Model Registry
The link_model method is currently only compatible with the legacy W&B Model Registry, which will soon be deprecated. To learn how to link a model artifact to the new edition of model registry, visit the Registry docs.
Use the link_model method to log model files to a W&B run and link it to the W&B Model Registry. If no registered model exists, W&B will create a new one for you with the name you provide for the registered_model_name parameter.
You can think of linking a model as similar to 'bookmarking' or 'publishing' a model to a centralized team repository of models that other members of your team can view and consume.
Note that when you link a model, that model is not duplicated in the Model Registry. The model is also not moved out of the project and into the registry. A linked model is a pointer to the original model in your project.
Use the Model Registry to organize your best models by task, manage the model lifecycle, facilitate easy tracking and auditing throughout the ML lifecycle, and automate downstream actions with webhooks or jobs.
A Registered Model is a collection or folder of linked model versions in the Model Registry. Registered models typically represent candidate models for a single modeling use case or task.
The following code snippet shows how to link a model with the link_model API. Be sure to replace the values enclosed in <> with your own:
import wandb
run = wandb.init(entity="<your-entity>", project="<your-project>")
run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")
run.finish()
See link_model in the API Reference guide for more information on optional parameters.
If the registered-model-name matches the name of a registered model that already exists within the Model Registry, the model will be linked to that registered model. If no such registered model exists, a new one will be created and the model will be the first one linked.
For example, suppose you have an existing registered model named “Fine-Tuned-Review-Autocompletion” in your Model Registry (see example here). And suppose that a few model versions are already linked to it: v0, v1, v2. If you call link_model with registered-model-name="Fine-Tuned-Review-Autocompletion", the new model will be linked to this existing registered model as v3. If no registered model with this name exists, a new one will be created and the new model will be linked as v0.
Example: Log and link a model to the W&B Model Registry
For example, the following code snippet logs model files and links the model to a registered model name "Fine-Tuned-Review-Autocompletion".
To do this, a user calls the link_model API. When they call the API, they provide a local filepath that points to the content of the model (path) and they provide a name for the registered model to link it to (registered_model_name).
Reminder: A registered model houses a collection of bookmarked model versions.
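A minimal sketch of that call; the local model path, entity, and project placeholders are assumptions to replace with your own:
import wandb

run = wandb.init(entity="<your-entity>", project="<your-project>")

# Hypothetical local path to the saved model files
path = "./models/review-autocompletion.pt"

run.link_model(path=path, registered_model_name="Fine-Tuned-Review-Autocompletion")
run.finish()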
2.1.6.6 - Log summary metrics
In addition to values that change over time during training, it is often important to track a single value that summarizes a model or a preprocessing step. Log this information in a W&B Run’s summary dictionary. A Run’s summary dictionary can handle numpy arrays, PyTorch tensors or TensorFlow tensors. When a value is one of these types we persist the entire tensor in a binary file and store high level metrics in the summary object, such as min, mean, variance, percentiles, and more.
The last value logged with wandb.log is automatically set as the summary dictionary in a W&B Run. If a summary metric dictionary is modified, the previous value is lost.
The following code snippet demonstrates how to provide a custom summary metric to W&B:
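For example, a sketch that records the best accuracy seen so far in the run's summary; the test() helper and args object are assumptions standing in for your own evaluation loop and configuration:
import wandb

run = wandb.init(config=args)  # args is your own config object

best_accuracy = 0.0
for epoch in range(1, args.epochs + 1):
    test_loss, test_accuracy = test()  # your own evaluation function
    if test_accuracy > best_accuracy:
        run.summary["best_accuracy"] = test_accuracy
        best_accuracy = test_accuracy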
You can update the summary attribute of an existing W&B Run after training has completed. Use the W&B Public API to update the summary attribute:
import numpy as np
import wandb

api = wandb.Api()
run = api.run("username/project/run_id")
run.summary["tensor"] = np.random.random(1000)
run.summary.update()
Customize summary metrics
Custom metric summaries are useful to capture model performance at the best step, instead of the last step, of training in your wandb.summary. For example, you might want to capture the maximum accuracy or the minimum loss value, instead of the final value.
Summary metrics can be controlled using the summary argument in define_metric, which accepts the following values: "min", "max", "mean", "best", "last", and "none". The "best" parameter can only be used in conjunction with the optional objective argument, which accepts the values "minimize" and "maximize". Here's an example of capturing the lowest value of loss and the maximum value of accuracy in the summary, instead of the default summary behavior, which uses the final value from history.
import wandb
import random
random.seed(1)
wandb.init()
# Define a metric we are interested in the minimum of
wandb.define_metric("loss", summary="min")
# Define a metric we are interested in the maximum of
wandb.define_metric("acc", summary="max")

for i in range(10):
    log_dict = {
        "loss": random.uniform(0, 1 / (i + 1)),
        "acc": random.uniform(1 / (i + 1), 1),
    }
    wandb.log(log_dict)
Here’s what the resulting min and max summary values look like, in pinned columns in the sidebar on the Project Page workspace:
2.1.6.7 - Log tables
To define a Table, specify the columns you want to see for each row of data. Each row might be a single item in your training dataset, a particular step or epoch during training, a prediction made by your model on a test item, an object generated by your model, etc. Each column has a fixed type: numeric, text, boolean, image, video, audio, etc. You do not need to specify the type in advance. Give each column a name, and make sure to only pass data of that type into that column index. For a more detailed example, see this report.
Use the wandb.Table constructor in one of two ways:
List of Rows: Log named columns and rows of data. For example, the following code snippet (shown after this list) generates a table with two rows and three columns:
Pandas DataFrame: Log a DataFrame using wandb.Table(dataframe=my_df). Column names will be extracted from the DataFrame.
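Here is a minimal sketch of the list-of-rows form described above; the column names, values, and project name are illustrative:
import wandb

wandb.init(project="table-example")  # hypothetical project name

# Two rows and three columns
table = wandb.Table(
    columns=["id", "prediction", "truth"],
    data=[[0, "cat", "cat"], [1, "dog", "cat"]],
)
wandb.log({"example_table": table})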
From an existing array or dataframe
# Assume a model has returned predictions on four images
# with the following fields available:
# - the image id
# - the image pixels, wrapped in a wandb.Image()
# - the model's predicted label
# - the ground truth label
my_data = [
    [0, wandb.Image("img_0.jpg"), 0, 0],
    [1, wandb.Image("img_1.jpg"), 8, 0],
    [2, wandb.Image("img_2.jpg"), 7, 1],
    [3, wandb.Image("img_3.jpg"), 1, 1],
]

# Create a wandb.Table() with corresponding columns
columns = ["id", "image", "prediction", "truth"]
test_table = wandb.Table(data=my_data, columns=columns)
Add data
Tables are mutable. As your script executes you can add more data to your table, up to 200,000 rows. There are two ways to add data to a table:
Add a Row: table.add_data("3a", "3b", "3c"). Note that the new row is not represented as a list. If your row is in list format, use the star notation, *, to expand the list into positional arguments: table.add_data(*my_row_list). The row must contain the same number of entries as there are columns in the table.
Add a Column: table.add_column(name="col_name", data=col_data). Note that the length of col_data must be equal to the table's current number of rows. Here, col_data can be a list of data or a NumPy NDArray.
Adding data incrementally
This code sample shows how to create and populate a W&B table incrementally. You define the table with predefined columns, including confidence scores for all possible labels, and add data row by row during inference. You can also add data to tables incrementally when resuming runs.
# Define the columns for the table, including confidence scores for each label
columns = ["id", "image", "guess", "truth"]
for digit in range(10):  # Add confidence score columns for each digit (0-9)
    columns.append(f"score_{digit}")

# Initialize the table with the defined columns
test_table = wandb.Table(columns=columns)

# Iterate through the test dataset and add data to the table row by row
# Each row includes the image ID, image, predicted label, true label, and confidence scores
for img_id, img in enumerate(mnist_test_data):
    true_label = mnist_test_data_labels[img_id]  # Ground truth label
    guess_label = my_model.predict(img)  # Predicted label
    test_table.add_data(
        img_id, wandb.Image(img), guess_label, true_label
    )  # Add row data to the table
Adding data to resumed runs
You can incrementally update a W&B table in resumed runs by loading an existing table from an artifact, retrieving the last row of data, and adding the updated metrics. Then, reinitialize the table for compatibility and log the updated version back to W&B.
# Load the existing table from the artifact
best_checkpt_table = wandb.use_artifact(table_tag).get(table_name)

# Get the last row of data from the table for resuming
best_iter, best_metric_max, best_metric_min = best_checkpt_table.data[-1]

# Update the best metrics as needed

# Add the updated data to the table
best_checkpt_table.add_data(best_iter, best_metric_max, best_metric_min)

# Reinitialize the table with its updated data to ensure compatibility
best_checkpt_table = wandb.Table(
    columns=["col1", "col2", "col3"], data=best_checkpt_table.data
)

# Log the updated table to Weights & Biases
wandb.log({table_name: best_checkpt_table})
Retrieve data
Once data is in a Table, access it by column or by row:
Row Iterator: Users can use the row iterator of Table such as for ndx, row in table.iterrows(): ... to efficiently iterate over the data’s rows.
Get a Column: Users can retrieve a column of data using table.get_column("col_name") . As a convenience, users can pass convert_to="numpy" to convert the column to a NumPy NDArray of primitives. This is useful if your column contains media types such as wandb.Image so that you can access the underlying data directly.
Save tables
After you generate a table of data in your script, for example a table of model predictions, save it to W&B to visualize the results live.
Log a table to a run
Use wandb.log() to save your table to the run, like so:
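For example, a short sketch; the table contents, key, and project name are illustrative:
import wandb

run = wandb.init(project="table-logging-example")  # hypothetical project name

my_table = wandb.Table(columns=["a", "b"], data=[["a1", "b1"], ["a2", "b2"]])
run.log({"Table Name": my_table})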
Each time a table is logged to the same key, a new version of the table is created and stored in the backend. This means you can log the same table across multiple training steps to see how model predictions improve over time, or compare tables across different runs, as long as they’re logged to the same key. You can log up to 200,000 rows.
To log more than 200,000 rows, you can override the limit with:
wandb.Table.MAX_ARTIFACT_ROWS = X
However, this would likely cause performance issues, such as slower queries, in the UI.
Access tables programmatically
In the backend, Tables are persisted as Artifacts. If you are interested in accessing a specific version, you can do so with the artifact API:
with wandb.init() as run:
    my_table = run.use_artifact("run-<run-id>-<table-name>:<tag>").get("<table-name>")
For more information on Artifacts, see the Artifacts Chapter in the Developer Guide.
Visualize tables
Any table logged this way will show up in your Workspace on both the Run Page and the Project Page. For more information, see Visualize and Analyze Tables.
Artifact tables
Use artifact.add() to log tables to the Artifacts section of your run instead of the workspace. This could be useful if you have a dataset that you want to log once and then reference for future runs.
run = wandb.init(project="my_project")
# Create a wandb Artifact for each meaningful step
test_predictions = wandb.Artifact("mnist_test_preds", type="predictions")

# [build up your predictions data as above]
test_table = wandb.Table(data=data, columns=columns)
test_predictions.add(test_table, "my_test_key")
run.log_artifact(test_predictions)
You can join tables you have locally constructed or tables you have retrieved from other artifacts using wandb.JoinedTable(table_1, table_2, join_key).
| Args | Description |
|---|---|
| table_1 | (str, wandb.Table, ArtifactEntry) the path to a wandb.Table in an artifact, the table object, or ArtifactEntry |
| table_2 | (str, wandb.Table, ArtifactEntry) the path to a wandb.Table in an artifact, the table object, or ArtifactEntry |
| join_key | (str, [str, str]) key or keys on which to perform the join |
To join two Tables you have logged previously in an artifact context, fetch them from the artifact and join the result into a new Table.
For example, the following code demonstrates how to read one Table of original songs called 'original_songs' and another Table of synthesized versions of the same songs called 'synth_songs'. The code joins the two tables on "song_id" and uploads the resulting table as a new W&B Table:
import wandb
run = wandb.init(project="my_project")
# Fetch the original songs table
orig_songs = run.use_artifact("original_songs:latest")
orig_table = orig_songs.get("original_samples")

# Fetch the synthesized songs table
synth_songs = run.use_artifact("synth_songs:latest")
synth_table = synth_songs.get("synth_samples")

# Join the tables on "song_id"
join_table = wandb.JoinedTable(orig_table, synth_table, "song_id")
join_at = wandb.Artifact("synth_summary", "analysis")

# Add the joined table to the artifact and log it to W&B
join_at.add(join_table, "synth_explore")
run.log_artifact(join_at)
Read this tutorial for an example on how to combine two previously stored tables stored in different Artifact objects.
We suggest you use W&B Artifacts to make the contents of the CSV file easier to reuse.
To get started, first import your CSV file. In the following code snippet, replace iris.csv with the name of your CSV file:
import wandb
import pandas as pd
# Read our CSV into a new DataFrame
new_iris_dataframe = pd.read_csv("iris.csv")
Convert the CSV file to a W&B Table to utilize W&B Dashboards.
# Convert the DataFrame into a W&B Table
iris_table = wandb.Table(dataframe=new_iris_dataframe)
Next, create a W&B Artifact and add the table to the Artifact:
# Add the table to an Artifact to increase the row
# limit to 200000 and make it easier to reuse
iris_table_artifact = wandb.Artifact("iris_artifact", type="dataset")
iris_table_artifact.add(iris_table, "iris_table")

# Log the raw csv file within an artifact to preserve our data
iris_table_artifact.add_file("iris.csv")
For more information about W&B Artifacts, see the Artifacts chapter.
Lastly, start a new W&B Run to track and log to W&B with wandb.init:
# Start a W&B run to log data
run = wandb.init(project="tables-walkthrough")

# Log the table to visualize with a run...
run.log({"iris": iris_table})

# and Log as an Artifact to increase the available row limit!
run.log_artifact(iris_table_artifact)
The wandb.init() API spawns a new background process to log data to a Run, and it synchronizes data to wandb.ai (by default). View live visualizations on your W&B Workspace Dashboard. The following image demonstrates the output of the code snippet demonstration.
The full script with the preceding code snippets is found below:
import wandb
import pandas as pd
# Read our CSV into a new DataFrame
new_iris_dataframe = pd.read_csv("iris.csv")

# Convert the DataFrame into a W&B Table
iris_table = wandb.Table(dataframe=new_iris_dataframe)

# Add the table to an Artifact to increase the row
# limit to 200000 and make it easier to reuse
iris_table_artifact = wandb.Artifact("iris_artifact", type="dataset")
iris_table_artifact.add(iris_table, "iris_table")

# Log the raw csv file within an artifact to preserve our data
iris_table_artifact.add_file("iris.csv")

# Start a W&B run to log data
run = wandb.init(project="tables-walkthrough")

# Log the table to visualize with a run...
run.log({"iris": iris_table})

# and Log as an Artifact to increase the available row limit!
run.log_artifact(iris_table_artifact)

# Finish the run (useful in notebooks)
run.finish()
Import and log your CSV of Experiments
In some cases, you might have your experiment details in a CSV file. Common details found in such CSV files include:
Configurations needed for your experiment (with the added benefit of being able to utilize our Sweeps Hyperparameter Tuning).
| Experiment | Model Name | Notes | Tags | Num Layers | Final Train Acc | Final Val Acc | Training Losses |
|---|---|---|---|---|---|---|---|
| Experiment 1 | mnist-300-layers | Overfit way too much on training data | [latest] | 300 | 0.99 | 0.90 | [0.55, 0.45, 0.44, 0.42, 0.40, 0.39] |
| Experiment 2 | mnist-250-layers | Current best model | [prod, best] | 250 | 0.95 | 0.96 | [0.55, 0.45, 0.44, 0.42, 0.40, 0.39] |
| Experiment 3 | mnist-200-layers | Did worse than the baseline model. Need to debug | [debug] | 200 | 0.76 | 0.70 | [0.55, 0.45, 0.44, 0.42, 0.40, 0.39] |
| … | … | … | … | … | … | … | … |
| Experiment N | mnist-X-layers | NOTES | … | … | … | … | […, …] |
W&B can take a CSV file of experiments and convert it into W&B Experiment Runs. The following code snippets and script demonstrate how to import and log your CSV file of experiments:
To get started, first read in your CSV file and convert it into a Pandas DataFrame. Replace "experiments.csv" with the name of your CSV file:
import wandb
import pandas as pd
FILENAME = "experiments.csv"
loaded_experiment_df = pd.read_csv(FILENAME)

PROJECT_NAME = "Converted Experiments"

EXPERIMENT_NAME_COL = "Experiment"
NOTES_COL = "Notes"
TAGS_COL = "Tags"
CONFIG_COLS = ["Num Layers"]
SUMMARY_COLS = ["Final Train Acc", "Final Val Acc"]
METRIC_COLS = ["Training Losses"]
# Format Pandas DataFrame to make it easier to work with
for i, row in loaded_experiment_df.iterrows():
    run_name = row[EXPERIMENT_NAME_COL]
    notes = row[NOTES_COL]
    tags = row[TAGS_COL]

    config = {}
    for config_col in CONFIG_COLS:
        config[config_col] = row[config_col]

    metrics = {}
    for metric_col in METRIC_COLS:
        metrics[metric_col] = row[metric_col]

    summaries = {}
    for summary_col in SUMMARY_COLS:
        summaries[summary_col] = row[summary_col]
Next, start a new W&B Run to track and log to W&B with wandb.init():
run = wandb.init(
project=PROJECT_NAME, name=run_name, tags=tags, notes=notes, config=config
)
As an experiment runs, you might want to log every instance of your metrics so they are available to view, query, and analyze with W&B. Use the run.log() command to accomplish this:
run.log({key: val})
You can optionally log a final summary metric to define the outcome of the run. Use the W&B define_metric API to accomplish this. In this example case, we will add the summary metrics to our run with run.summary.update():
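run.summary.update(summaries)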
Below is the full example script that converts the above sample table into a W&B Dashboard:
import wandb
import pandas as pd

FILENAME = "experiments.csv"
loaded_experiment_df = pd.read_csv(FILENAME)

PROJECT_NAME = "Converted Experiments"

EXPERIMENT_NAME_COL = "Experiment"
NOTES_COL = "Notes"
TAGS_COL = "Tags"
CONFIG_COLS = ["Num Layers"]
SUMMARY_COLS = ["Final Train Acc", "Final Val Acc"]
METRIC_COLS = ["Training Losses"]

# Format Pandas DataFrame to make it easier to work with
for i, row in loaded_experiment_df.iterrows():
    run_name = row[EXPERIMENT_NAME_COL]
    notes = row[NOTES_COL]
    tags = row[TAGS_COL]

    config = {}
    for config_col in CONFIG_COLS:
        config[config_col] = row[config_col]

    metrics = {}
    for metric_col in METRIC_COLS:
        metrics[metric_col] = row[metric_col]

    summaries = {}
    for summary_col in SUMMARY_COLS:
        summaries[summary_col] = row[summary_col]

    # Start a W&B run to track and log the experiment
    run = wandb.init(
        project=PROJECT_NAME, name=run_name, tags=tags, notes=notes, config=config
    )

    # Log each recorded metric value so it can be viewed, queried, and analyzed
    for key, val in metrics.items():
        if isinstance(val, list):
            for _val in val:
                run.log({key: _val})
        else:
            run.log({key: val})

    run.summary.update(summaries)
    run.finish()
2.1.7 - Track Jupyter notebooks
Use W&B with Jupyter to get interactive visualizations without leaving your notebook.
Use W&B with Jupyter to get interactive visualizations without leaving your notebook. Combine custom analysis, experiments, and prototypes, all fully logged.
Use cases for W&B with Jupyter notebooks
Iterative experimentation: Run and re-run experiments, tweaking parameters, and have all the runs you do saved automatically to W&B without having to take manual notes along the way.
Code saving: When reproducing a model, it’s hard to know which cells in a notebook ran, and in which order. Turn on code saving on your settings page to save a record of cell execution for each experiment.
Custom analysis: Once runs are logged to W&B, it’s easy to get a dataframe from the API and do custom analysis, then log those results to W&B to save and share in reports.
Getting started in a notebook
Start your notebook with the following code to install W&B and link your account:
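!pip install wandb

import wandb
wandb.login()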
After running wandb.init() , start a new cell with %%wandb to see live graphs in the notebook. If you run this cell multiple times, data will be appended to the run.
Rendering live W&B interfaces directly in your notebooks
You can also display any existing dashboards, sweeps, or reports directly in your notebook using the %wandb magic:
# Display a project workspace
%wandb USERNAME/PROJECT
# Display a single run
%wandb USERNAME/PROJECT/runs/RUN_ID
# Display a sweep
%wandb USERNAME/PROJECT/sweeps/SWEEP_ID
# Display a report
%wandb USERNAME/PROJECT/reports/REPORT_ID
# Specify the height of embedded iframe
%wandb USERNAME/PROJECT -h 2048
As an alternative to the %%wandb or %wandb magics, after running wandb.init() you can end any cell with wandb.run to show in-line graphs, or call ipython.display(...) on any report, sweep, or run object returned from our apis.
# Initialize wandb.run first
wandb.init()

# If cell outputs wandb.run, you'll see live graphs
wandb.run
Easy authentication in Colab: When you call wandb.init for the first time in a Colab, we automatically authenticate your runtime if you’re currently logged in to W&B in your browser. On the overview tab of your run page, you’ll see a link to the Colab.
Jupyter Magic: Display dashboards, sweeps and reports directly in your notebooks. The %wandb magic accepts a path to your project, sweeps or reports and will render the W&B interface directly in the notebook.
Launch dockerized Jupyter: Call wandb docker --jupyter to launch a docker container, mount your code in it, ensure Jupyter is installed, and launch on port 8888.
Run cells in arbitrary order without fear: By default, we wait until the next time wandb.init is called to mark a run as finished. That allows you to run multiple cells (say, one to set up data, one to train, one to test) in whatever order you like and have them all log to the same run. If you turn on code saving in settings, you’ll also log the cells that were executed, in order and in the state in which they were run, enabling you to reproduce even the most non-linear of pipelines. To mark a run as complete manually in a Jupyter notebook, call run.finish.
import wandb
run = wandb.init()
# training script and logging goes here
run.finish()
2.1.8 - Experiments limits and performance
Keep your pages in W&B faster and more responsive by logging within these suggested bounds.
Keep your pages in W&B faster and more responsive by logging within the following suggested bounds.
Logged metrics
Use wandb.log to track experiment metrics. Once logged, these metrics generate charts and show up in tables. Too much logged data can make the app slow.
Distinct metric count
For faster performance, keep the total number of distinct metrics in a project under 10,000.
import wandb

wandb.log(
    {
        "a": 1,  # "a" is a distinct metric
        "b": {
            "c": "hello",  # "b.c" is a distinct metric
            "d": [1, 2, 3],  # "b.d" is a distinct metric
        },
    }
)
W&B automatically flattens nested values. This means that if you pass a dictionary, W&B turns it into a dot-separated name. For config values, W&B supports 3 dots in the name. For summary values, W&B supports 4 dots.
If your workspace suddenly slows down, check whether recent runs have unintentionally logged thousands of new metrics. (This is easiest to spot by seeing sections with thousands of plots that have only one or two runs visible on them.) If they have, consider deleting those runs and recreating them with the desired metrics.
Value width
Limit the size of a single logged value to under 1 MB and the total size of a single wandb.log call to under 25 MB. This limit does not apply to wandb.Media types like wandb.Image, wandb.Audio, etc.
import json

# ❌ not recommended
wandb.log({"wide_key": range(10000000)})

# ❌ not recommended
with open("large_file.json", "r") as f:
    large_data = json.load(f)
wandb.log(large_data)
Wide values can affect the plot load times for all metrics in the run, not just the metric with the wide values.
Data is saved and tracked even if you log values wider than the recommended amount. However, your plots may load more slowly.
Metric frequency
Pick a logging frequency that is appropriate to the metric you are logging. As a general rule of thumb, the wider the metric the less frequently you should log it. W&B recommends:
Scalars: <100,000 logged points per metric
Media: <50,000 logged points per metric
Histograms: <10,000 logged points per metric
# Training loop with 1m total steps
for step in range(1000000):
    # ❌ not recommended
    wandb.log(
        {
            "scalar": step,  # 100,000 scalars
            "media": wandb.Image(...),  # 100,000 images
            "histogram": wandb.Histogram(...),  # 100,000 histograms
        }
    )

    # ✅ recommended
    if step % 1000 == 0:
        wandb.log(
            {
                "histogram": wandb.Histogram(...),  # 10,000 histograms
            },
            commit=False,
        )
    if step % 200 == 0:
        wandb.log(
            {
                "media": wandb.Image(...),  # 50,000 images
            },
            commit=False,
        )
    if step % 100 == 0:
        wandb.log(
            {
                "scalar": step,  # 100,000 scalars
            },
            commit=True,
        )  # Commit batched, per-step metrics together
W&B continues to accept your logged data but pages may load more slowly if you exceed guidelines.
Config size
Limit the total size of your run config to less than 10 MB. Logging large values could slow down your project workspaces and runs table operations.
# ✅ recommended
wandb.init(
    config={
        "lr": 0.1,
        "batch_size": 32,
        "epochs": 4,
    }
)

# ❌ not recommended
wandb.init(
    config={
        "steps": range(10000000),
    }
)

# ❌ not recommended
import json

with open("large_config.json", "r") as f:
    large_config = json.load(f)
wandb.init(config=large_config)
Run count
For faster loading times, keep the total number of runs in a single project under 10,000. Large run counts can slow down project workspaces and runs table operations, especially when grouping is enabled or runs have a large count of distinct metrics.
If you find that you or your team are frequently accessing the same set of runs (for example, recent runs), consider bulk moving other runs to a new project used as an archive, leaving a smaller set of runs in your working project.
Section count
Having hundreds of sections in a workspace can hurt performance. Consider creating sections based on high-level groupings of metrics and avoiding an anti-pattern of one section for each metric.
If you find you have too many sections and performance is slow, consider the workspace setting to create sections by prefix rather than suffix, which can result in fewer sections and better performance.
File count
Keep the total number of files uploaded for a single run under 1,000. You can use W&B Artifacts when you need to log a large number of files. Exceeding 1,000 files in a single run can slow down your run pages.
Python script performance
A few factors can reduce the performance of your Python script:
The size of your data is too large. Large data sizes could introduce a >1 ms overhead to the training loop.
The speed of your network and how the W&B backend is configured
Calling wandb.log more than a few times per second. This is due to a small latency added to the training loop every time wandb.log is called.
Is frequent logging slowing your training runs down? Check out this Colab for methods to get better performance by changing your logging strategy.
W&B does not assert any limits beyond rate limiting. The W&B Python SDK automatically applies exponential backoff and retries for requests that exceed limits, and reports a “Network failure” on the command line. For unpaid accounts, W&B may reach out in extreme cases where usage exceeds reasonable thresholds.
Rate limits
W&B SaaS Cloud API implements a rate limit to maintain system integrity and ensure availability. This measure prevents any single user from monopolizing available resources in the shared infrastructure, ensuring that the service remains accessible to all users. You may encounter a lower rate limit for a variety of reasons.
Rate limits are subject to change.
Rate limit HTTP headers
The following table describes rate limit HTTP headers:
Header name
Description
RateLimit-Limit
The amount of quota available per time window, scaled in the range of 0 to 1000
RateLimit-Remaining
The amount of quota in the current rate limit window, scaled in the range of 0 and 1000
RateLimit-Reset
The number of seconds until the current quota resets
Rate limits on metric logging API
The wandb.log calls in your script use a metrics logging API to log your training data to W&B. This API is engaged through either online or offline syncing. In either case, it imposes a rate limit quota in a rolling time window. This includes limits on total request size and request rate, where the latter refers to the number of requests in a time duration.
W&B applies rate limits per W&B project. So if you have 3 projects in a team, each project has its own rate limit quota. Users on Teams and Enterprise plans have higher rate limits than those on the Free plan.
When you hit the rate limit while using the metrics logging API, you see a relevant message indicating the error in the standard output.
Suggestions for staying under the metrics logging API rate limit
Exceeding the rate limit may delay run.finish() until the rate limit resets. To avoid this, consider the following strategies:
Update your W&B Python SDK version: Ensure you are using the latest version of the W&B Python SDK. The W&B Python SDK is regularly updated and includes enhanced mechanisms for gracefully retrying requests and optimizing quota usage.
Reduce metric logging frequency:
Minimize the frequency of logging metrics to conserve your quota. For example, you can modify your code to log metrics every five epochs instead of every epoch:
if epoch % 5 == 0:  # Log metrics every 5 epochs
    wandb.log({"acc": accuracy, "loss": loss})
Manual data syncing: W&B stores your run data locally if you are rate limited. You can manually sync your data with the command wandb sync <run-file-path>. For more details, see the wandb sync reference.
Rate limits on GraphQL API
The W&B Models UI and SDK’s public API make GraphQL requests to the server for querying and modifying data. For all GraphQL requests in SaaS Cloud, W&B applies rate limits per IP address for unauthorized requests and per user for authorized requests. The limit is based on request rate (request per second) within a fixed time window, where your pricing plan determines the default limits. For relevant SDK requests that specify a project path (for example, reports, runs, artifacts), W&B applies rate limits per project, measured by database query time.
Users on Teams and Enterprise plans receive higher rate limits than those on the Free plan.
When you hit the rate limit while using the W&B Models SDK’s public API, you see a relevant message indicating the error in the standard output.
Suggestions for staying under the GraphQL API rate limit
If you are fetching a large volume of data using the W&B Models SDK’s public API, consider waiting at least one second between requests. If you receive a 429 status code or see RateLimit-Remaining=0 in the response headers, wait for the number of seconds specified in RateLimit-Reset before retrying.
Browser considerations
The W&B app can be memory-intensive and performs best in Chrome. Depending on your computer’s memory, having W&B active in 3+ tabs at once can cause performance to degrade. If you encounter unexpectedly slow performance, consider closing other tabs or applications.
Reporting performance issues to W&B
W&B takes performance seriously and investigates every report of lag. To expedite investigation, when reporting slow loading times consider invoking W&B’s built-in performance logger that captures key metrics and performance events. Append &PERF_LOGGING to your URL, and share the output of your console.
2.1.9 - Import and export data
Import data from MLFlow, export or update data that you have saved to W&B
Export data or import data with W&B Public APIs.
This feature requires python>=3.8
Import data from MLFlow
W&B supports importing data from MLFlow, including experiments, runs, artifacts, metrics, and other metadata.
Install dependencies:
# note: this requires py38+
pip install wandb[importers]
Log in to W&B. Follow the prompts if you have not logged in before.
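A minimal sketch of the default import flow, assuming the MlflowImporter helper in wandb.apis.importers.mlflow and a placeholder tracking URI:
from wandb.apis.importers.mlflow import MlflowImporter

# Point the importer at your MLFlow tracking server
importer = MlflowImporter(mlflow_tracking_uri="...")

# Collect all runs from the MLFlow server and import them into W&B
runs = importer.collect_runs()
importer.import_runs(runs)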
By default, importer.collect_runs() collects all runs from the MLFlow server. If you prefer to upload a special subset, you can construct your own runs iterable and pass it to the importer.
import mlflow
from typing import Iterable

from wandb.apis.importers.mlflow import MlflowRun

client = mlflow.tracking.MlflowClient(mlflow_tracking_uri)

runs: Iterable[MlflowRun] = []
for run in client.search_runs(...):
    runs.append(MlflowRun(run, client))

importer.import_runs(runs)
If you are importing from MLFlow hosted on Databricks, set mlflow-tracking-uri="databricks" in the previous step.
To skip importing artifacts, you can pass artifacts=False:
importer.import_runs(runs, artifacts=False)
To import to a specific W&B entity and project, you can pass a Namespace:
from wandb.apis.importers import Namespace
importer.import_runs(runs, namespace=Namespace(entity, project))
Export Data
Use the Public API to export or update data that you have saved to W&B. Before using this API, log data from your script. Check the Quickstart for more details.
Use Cases for the Public API
Export Data: Pull down a dataframe for custom analysis in a Jupyter Notebook. Once you have explored the data, you can sync your findings by creating a new analysis run and logging results, for example: wandb.init(job_type="analysis")
Update Existing Runs: You can update the data logged in association with a W&B run. For example, you might want to update the config of a set of runs to include additional information, like the architecture or a hyperparameter that wasn’t originally logged.
Authenticate your machine with your API key in one of two ways:
Run wandb login on the command line and paste in your API key.
Set the WANDB_API_KEY environment variable to your API key.
Find the run path
To use the Public API, you’ll often need the run path which is <entity>/<project>/<run_id>. In the app UI, open a run page and click the Overview tab to get the run path.
Export Run Data
Download data from a finished or active run. Common usage includes downloading a dataframe for custom analysis in a Jupyter notebook, or using custom logic in an automated environment.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
The most commonly used attributes of a run object are:
Attribute
Meaning
run.config
A dictionary of the run’s configuration information, such as the hyperparameters for a training run or the preprocessing methods for a run that creates a dataset Artifact. Think of these as the run’s inputs.
run.history()
A list of dictionaries meant to store values that change while the model is training such as loss. The command wandb.log() appends to this object.
run.summary
A dictionary of information that summarizes the run’s results. This can be scalars like accuracy and loss, or large files. By default, wandb.log() sets the summary to the final value of a logged time series. The contents of the summary can also be set directly. Think of the summary as the run’s outputs.
You can also modify or update the data of past runs. By default a single instance of an api object will cache all network requests. If your use case requires real time information in a running script, call api.flush() to get updated values.
Understanding the Different Attributes
For the below run
import random

import wandb

n_epochs = 5
config = {"n_epochs": n_epochs}

run = wandb.init(project=project, config=config)

for n in range(run.config.get("n_epochs")):
    run.log(
        {"val": random.randint(0, 1000), "loss": (random.randint(0, 1000) / 1000.00)}
    )
run.finish()
these are the different outputs for the above run object attributes
run.config
{"n_epochs": 5}
run.history()
   _step  val   loss  _runtime  _timestamp
0      0  500  0.244         4  1644345412
1      1   45  0.521         4  1644345412
2      2  240  0.785         4  1644345412
3      3   31  0.305         4  1644345412
4      4  525  0.041         4  1644345412
The default history method samples the metrics to a fixed number of samples (the default is 500; you can change this with the samples argument). If you want to export all of the data on a large run, use the run.scan_history() method. For more details see the API Reference.
Querying Multiple Runs
This example script finds a project and outputs a CSV of runs with name, configs and summary stats. Replace <entity> and <project> with your W&B entity and the name of your project, respectively.
import pandas as pd
import wandb
api = wandb.Api()
entity, project = "<entity>", "<project>"
runs = api.runs(entity + "/" + project)

summary_list, config_list, name_list = [], [], []
for run in runs:
    # .summary contains output keys/values for
    # metrics such as accuracy.
    # We call ._json_dict to omit large files
    summary_list.append(run.summary._json_dict)

    # .config contains the hyperparameters.
    # We remove special values that start with _.
    config_list.append({k: v for k, v in run.config.items() if not k.startswith("_")})

    # .name is the human-readable name of the run.
    name_list.append(run.name)

runs_df = pd.DataFrame(
    {"summary": summary_list, "config": config_list, "name": name_list}
)

runs_df.to_csv("project.csv")
The W&B API also provides a way for you to query across runs in a project with api.runs(). The most common use case is exporting runs data for custom analysis. The query interface is the same as the one MongoDB uses.
Calling api.runs returns a Runs object that is iterable and acts like a list. By default the object loads 50 runs at a time in sequence as required, but you can change the number loaded per page with the per_page keyword argument.
api.runs also accepts an order keyword argument. The default order is -created_at. To order results ascending, specify +created_at. You can also sort by config or summary values. For example, summary.val_acc or config.experiment_name.
Error Handling
If errors occur while talking to W&B servers a wandb.CommError will be raised. The original exception can be introspected via the exc attribute.
Get the latest git commit through the API
In the UI, click on a run and then click the Overview tab on the run page to see the latest git commit. It’s also in the file wandb-metadata.json . Using the public API, you can get the git hash with run.commit.
Get a run’s name and ID during a run
After calling wandb.init() you can access the random run ID or the human readable run name from your script like this:
Unique run ID (8 character hash): wandb.run.id
Random run name (human readable): wandb.run.name
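For example, a minimal sketch that prints both identifiers from the run object returned by wandb.init():
import wandb

run = wandb.init()
print(run.id)    # unique run ID, for example an 8-character hash
print(run.name)  # randomly generated, human-readable run name
run.finish()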
If you’re thinking about ways to set useful identifiers for your runs, here’s what we recommend:
Run ID: leave it as the generated hash. This needs to be unique across runs in your project.
Run name: This should be something short, readable, and preferably unique so that you can tell the difference between different lines on your charts.
Run notes: This is a great place to put a quick description of what you’re doing in your run. You can set this with wandb.init(notes="your notes here")
Run tags: Track things dynamically in run tags, and use filters in the UI to filter your table down to just the runs you care about. You can set tags from your script and then edit them in the UI, both in the runs table and the overview tab of the run page. See the detailed instructions here.
Public API Examples
Export data to visualize in matplotlib or seaborn
Check out our API examples for some common export patterns. You can also click the download button on a custom plot or on the expanded runs table to download a CSV from your browser.
Read metrics from a run
This example outputs timestamp and accuracy saved with wandb.log({"accuracy": acc}) for a run saved to "<entity>/<project>/<run_id>".
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
if run.state == "finished":
    for i, row in run.history().iterrows():
        print(row["_timestamp"], row["accuracy"])
Filter runs
You can filter runs by using the MongoDB Query Language.
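For example, a sketch that filters runs on a config value with api.runs (the field name config.experiment_name is illustrative):
import wandb

api = wandb.Api()

# Only return runs whose config has experiment_name == "foo"
runs = api.runs(
    "<entity>/<project>",
    filters={"config.experiment_name": "foo"},
)
print(f"Found {len(runs)} matching runs")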
To pull specific metrics from a run, use the keys argument. The default number of samples when using run.history() is 500. Logged steps that do not include a specific metric will appear in the output dataframe as NaN. The keys argument will cause the API to sample steps that include the listed metric keys more frequently.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
if run.state == "finished":
    for i, row in run.history(keys=["accuracy"]).iterrows():
        print(row["_timestamp"], row["accuracy"])
Compare two runs
This will output the config parameters that are different between run1 and run2.
import pandas as pd
import wandb
api = wandb.Api()
# replace with your <entity>, <project>, and <run_id>
run1 = api.run("<entity>/<project>/<run_id>")
run2 = api.run("<entity>/<project>/<run_id>")
df = pd.DataFrame([run1.config, run2.config]).transpose()
df.columns = [run1.name, run2.name]
print(df[df[run1.name] != df[run2.name]])
Update metrics for a run, after the run has finished
This example sets the accuracy of a previous run to 0.9. It also modifies the accuracy histogram of a previous run to be the histogram of numpy_array.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
run.summary["accuracy"] = 0.9
run.summary["accuracy_histogram"] = wandb.Histogram(numpy_array)
run.summary.update()
Rename a metric in a run, after the run has finished
This example renames a summary column in your tables.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
run.summary["new_name"] = run.summary["old_name"]
del run.summary["old_name"]
run.summary.update()
Renaming a column only applies to tables. Charts will still refer to metrics by their original names.
Update config for an existing run
This example updates one of your configuration settings.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
run.config["key"] = updated_value
run.update()
Export system resource consumptions to a CSV file
The snippet below finds the system resource consumption metrics for a run and then saves them to a CSV.
import wandb
run = wandb.Api().run("<entity>/<project>/<run_id>")
system_metrics = run.history(stream="events")
system_metrics.to_csv("sys_metrics.csv")
Get unsampled metric data
When you pull data from history, by default it’s sampled to 500 points. Get all the logged data points using run.scan_history(). Here’s an example downloading all the loss data points logged in history.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
history = run.scan_history()
losses = [row["loss"] for row in history]
Get paginated data from history
If metrics are being fetched slowly on our backend or API requests are timing out, you can try lowering the page size in scan_history so that individual requests don’t time out. The default page size is 500, so you can experiment with different sizes to see what works best:
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
run.scan_history(keys=sorted(cols), page_size=100)
Export metrics from all runs in a project to a CSV file
This script pulls down the runs in a project and produces a dataframe and a CSV of runs including their names, configs, and summary stats. Replace <entity> and <project> with your W&B entity and the name of your project, respectively.
import pandas as pd
import wandb
api = wandb.Api()
entity, project = "<entity>", "<project>"
runs = api.runs(entity + "/" + project)

summary_list, config_list, name_list = [], [], []
for run in runs:
    # .summary contains the output keys/values
    # for metrics such as accuracy.
    # We call ._json_dict to omit large files
    summary_list.append(run.summary._json_dict)

    # .config contains the hyperparameters.
    # We remove special values that start with _.
    config_list.append({k: v for k, v in run.config.items() if not k.startswith("_")})

    # .name is the human-readable name of the run.
    name_list.append(run.name)

runs_df = pd.DataFrame(
    {"summary": summary_list, "config": config_list, "name": name_list}
)

runs_df.to_csv("project.csv")
Get the starting time for a run
This code snippet retrieves the time at which the run was created.
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")
start_time = run.created_at
Upload files to a finished run
The code snippet below uploads a selected file to a finished run.
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")
run.upload_file("file_name.extension")
Download a file from a run
This finds the file “model-best.h5” associated with run ID uxte44z7 in the cifar project and saves it locally.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
run.file("model-best.h5").download()
Download all files from a run
This finds all files associated with a run and saves them locally.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
for file in run.files():
file.download()
Get runs from a specific sweep
This snippet downloads all the runs associated with a particular sweep.
The best_run is the run with the best metric as defined by the metric parameter in the sweep config.
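A minimal sketch, assuming the public API's sweep object exposes its runs and a best_run() helper:
import wandb

api = wandb.Api()
sweep = api.sweep("<entity>/<project>/<sweep_id>")

# All runs in the sweep
sweep_runs = sweep.runs

# Run with the best value of the metric defined in the sweep configuration
best_run = sweep.best_run()
print(best_run.name)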
Download the best model file from a sweep
This snippet downloads the model file with the highest validation accuracy from a sweep with runs that saved model files to model.h5.
import wandb
api = wandb.Api()
sweep = api.sweep("<entity>/<project>/<sweep_id>")
runs = sorted(sweep.runs, key=lambda run: run.summary.get("val_acc", 0), reverse=True)
val_acc = runs[0].summary.get("val_acc", 0)
print(f"Best run {runs[0].name} with {val_acc}% val accuracy")
runs[0].file("model.h5").download(replace=True)
print("Best model saved to model-best.h5")
Delete all files with a given extension from a run
This snippet deletes files with a given extension from a run.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
extension = ".png"
files = run.files()
for file in files:
    if file.name.endswith(extension):
        file.delete()
Download system metrics data
This snippet produces a dataframe with all the system resource consumption metrics for a run and then saves it to a CSV.
import wandb
api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")
system_metrics = run.history(stream="events")
system_metrics.to_csv("sys_metrics.csv")
Update summary metrics
You can pass a dictionary to update summary metrics.
run.summary.update({"key": val})
Get the command that ran the run
Each run captures the command that launched it on the run overview page. To pull this command down from the API, you can run:
import json

import wandb

api = wandb.Api()

run = api.run("<entity>/<project>/<run_id>")
meta = json.load(run.file("wandb-metadata.json").download())
program = ["python"] + [meta["program"]] + meta["args"]
2.1.10 - Environment variables
Set W&B environment variables.
When you’re running a script in an automated environment, you can control wandb with environment variables set before the script runs or within the script.
# This is secret and shouldn't be checked into version control
WANDB_API_KEY=$YOUR_API_KEY

# Name and notes optional
WANDB_NAME="My first run"
WANDB_NOTES="Smaller learning rate, more regularization."

# Only needed if you don't check in the wandb/settings file
WANDB_ENTITY=$username
WANDB_PROJECT=$project

# If you don't want your script to sync to the cloud
os.environ["WANDB_MODE"] = "offline"
Optional environment variables
Use these optional environment variables to do things like set up authentication on remote machines.
Variable name
Usage
WANDB_ANONYMOUS
Set this to allow, never, or must to let users create anonymous runs with secret urls.
WANDB_API_KEY
Sets the authentication key associated with your account. You can find your key on your settings page. This must be set if wandb login hasn’t been run on the remote machine.
WANDB_BASE_URL
If you’re using wandb/local you should set this environment variable to http://YOUR_IP:YOUR_PORT
WANDB_CACHE_DIR
This defaults to ~/.cache/wandb, you can override this location with this environment variable
WANDB_CONFIG_DIR
This defaults to ~/.config/wandb, you can override this location with this environment variable
WANDB_CONFIG_PATHS
Comma separated list of yaml files to load into wandb.config. See config.
WANDB_CONSOLE
Set this to “off” to disable stdout / stderr logging. This defaults to “on” in environments that support it.
WANDB_DIR
Set this to an absolute path to store all generated files here instead of the wandb directory relative to your training script. Be sure this directory exists and the user your process runs as can write to it.
WANDB_DISABLE_GIT
Prevent wandb from probing for a git repository and capturing the latest commit / diff.
WANDB_DISABLE_CODE
Set this to true to prevent wandb from saving notebooks or git diffs. We’ll still save the current commit if we’re in a git repo.
WANDB_DOCKER
Set this to a docker image digest to enable restoring of runs. This is set automatically with the wandb docker command. You can obtain an image digest by running wandb docker my/image/name:tag --digest
WANDB_ENTITY
The entity associated with your run. If you have run wandb init in the directory of your training script, it will create a directory named wandb and will save a default entity which can be checked into source control. If you don’t want to create that file or want to override the file you can use the environmental variable.
WANDB_ERROR_REPORTING
Set this to false to prevent wandb from logging fatal errors to its error tracking system.
WANDB_HOST
Set this to the hostname you want to see in the wandb interface if you don’t want to use the system provided hostname
WANDB_IGNORE_GLOBS
Set this to a comma separated list of file globs to ignore. These files will not be synced to the cloud.
WANDB_JOB_NAME
Specify a name for any jobs created by wandb.
WANDB_JOB_TYPE
Specify the job type, like “training” or “evaluation” to indicate different types of runs. See grouping for more info.
WANDB_MODE
If you set this to “offline” wandb will save your run metadata locally and not sync to the server. If you set this to disabled wandb will turn off completely.
WANDB_NAME
The human-readable name of your run. If not set it will be randomly generated for you
WANDB_NOTEBOOK_NAME
If you’re running in jupyter you can set the name of the notebook with this variable. We attempt to auto detect this.
WANDB_NOTES
Longer notes about your run. Markdown is allowed and you can edit this later in the UI.
WANDB_PROJECT
The project associated with your run. This can also be set with wandb init, but the environmental variable will override the value.
WANDB_RESUME
By default this is set to never. If set to auto, wandb automatically resumes failed runs. If set to must, the run must exist on startup. If you want to always generate your own unique ids, set this to allow and always set WANDB_RUN_ID.
WANDB_RUN_GROUP
Specify the experiment name to automatically group runs together. See grouping for more info.
WANDB_RUN_ID
Set this to a globally unique string (per project) corresponding to a single run of your script. It must be no longer than 64 characters. All non-word characters will be converted to dashes. This can be used to resume an existing run in cases of failure.
WANDB_SILENT
Set this to true to silence wandb log statements. If this is set all logs will be written to WANDB_DIR/debug.log
WANDB_SHOW_RUN
Set this to true to automatically open a browser with the run url if your operating system supports it.
WANDB_TAGS
A comma separated list of tags to be applied to the run.
WANDB_USERNAME
The username of a member of your team associated with the run. This can be used along with a service account API key to enable attribution of automated runs to members of your team.
WANDB_USER_EMAIL
The email of a member of your team associated with the run. This can be used along with a service account API key to enable attribution of automated runs to members of your team.
Singularity environments
If you’re running containers in Singularity you can pass environment variables by pre-pending the above variables with SINGULARITYENV_. More details about Singularity environment variables can be found here.
Running on AWS
If you’re running batch jobs in AWS, it’s easy to authenticate your machines with your W&B credentials. Get your API key from your settings page, and set the WANDB_API_KEY environment variable in the AWS batch job spec.
2.2 - Sweeps
Hyperparameter search and model optimization with W&B Sweeps
Use W&B Sweeps to automate hyperparameter search and visualize rich, interactive experiment tracking. Pick from popular search methods such as Bayesian, grid search, and random to search the hyperparameter space. Scale and parallelize sweeps across one or more machines.
The code snippets on this page, and the colab linked on this page, show how to initialize and create a sweep with the W&B CLI. See the Sweeps Walkthrough for a step-by-step outline of the W&B Python SDK commands to use to define a sweep configuration, initialize a sweep, and start a sweep.
How to get started
Depending on your use case, explore the following resources to get started with W&B Sweeps:
Read through the sweeps walkthrough for a step-by-step outline of the W&B Python SDK commands to use to define a sweep configuration, initialize a sweep, and start a sweep.
The following sections break down and explain each step in the code sample.
Set up your training code
Define a training function that takes in hyperparameter values from wandb.config and uses them to train a model and return metrics.
Optionally provide the name of the project where you want the output of the W&B Run to be stored (project parameter in wandb.init). If the project is not specified, the run is put in an “Uncategorized” project.
Both the sweep and the run must be in the same project. Therefore, the name you provide when you initialize W&B must match the name of the project you provide when you initialize a sweep.
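A minimal sketch of such a training function, assuming the hyperparameter names and metric used in the fuller example later on this page:
import wandb


def main():
    run = wandb.init(project="my-first-sweep")

    # Hyperparameter values are supplied by the sweep agent through wandb.config
    lr = wandb.config.lr
    batch_size = wandb.config.batch_size

    # ... train the model with lr and batch_size ...
    val_acc = 0.9  # placeholder metric for illustration

    # Log the metric the sweep optimizes
    wandb.log({"val_acc": val_acc})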
Define the search space with a sweep configuration
Within a dictionary, specify which hyperparameters you want to sweep over. For more information about configuration options, see Define sweep configuration.
The following example demonstrates a sweep configuration that uses a random search ('method':'random'). The sweep randomly selects values listed in the configuration for the batch size, epochs, and the learning rate.
Throughout the sweep, W&B maximizes ('goal':'maximize') the metric specified in the metric key; in this example, the validation accuracy ('val_acc').
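Such a configuration might look like the following sketch (the same configuration appears in the full example later on this page):
sweep_configuration = {
    "method": "random",
    "name": "sweep",
    "metric": {"goal": "maximize", "name": "val_acc"},
    "parameters": {
        "batch_size": {"values": [16, 32, 64]},
        "epochs": {"values": [5, 10, 15]},
        "lr": {"max": 0.1, "min": 0.0001},
    },
}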
W&B uses a Sweep Controller to manage sweeps on the cloud (standard) or locally (local) across one or more machines. For more information about Sweep Controllers, see Search and stop algorithms locally.
A sweep identification number is returned when you initialize a sweep:
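For example, with the W&B Python SDK (mirroring the full example later on this page):
sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep")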
From the terminal, hit Ctrl+c to stop the run that the Sweep agent is currently running. To kill the agent, hit Ctrl+c again after the run is stopped.
2.2.2 - Add W&B (wandb) to your code
Add W&B to your Python code script or Jupyter Notebook.
There are numerous ways to add the W&B Python SDK to your script or Jupyter Notebook. Outlined below is a “best practice” example of how to integrate the W&B Python SDK into your own code.
Original training script
Suppose you have the following code in a Jupyter Notebook cell or Python script. We define a function called main that mimics a typical training loop. For each epoch, the accuracy and loss are computed on the training and validation data sets. The values are randomly generated for the purpose of this example.
We defined a dictionary called config where we store hyperparameter values. At the end of the cell, we call the main function to execute the mock training code.
# train.py
import random

import numpy as np


def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


config = {"lr": 0.0001, "bs": 16, "epochs": 5}


def main():
    # Note that we define values from the `config` dictionary
    # instead of hard coding them
    lr = config["lr"]
    bs = config["bs"]
    epochs = config["epochs"]

    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)

        print("epoch: ", epoch)
        print("training accuracy:", train_acc, "training loss:", train_loss)
        print("validation accuracy:", val_acc, "validation loss:", val_loss)


# Call the main function.
main()
Training script with W&B Python SDK
The following code examples demonstrate how to add the W&B Python SDK into your code. If you start W&B Sweep jobs in the CLI, you will want to explore the CLI tab. If you start W&B Sweep jobs within a Jupyter notebook or Python script, explore the Python SDK tab.
To create a W&B Sweep, we added the following to the code example:
Line 1: Import the Weights & Biases Python SDK.
Line 6: Create a dictionary object where the key-value pairs define the sweep configuration. In the following example, the batch size (batch_size), epochs (epochs), and the learning rate (lr) hyperparameters are varied during each sweep. For more information on how to create a sweep configuration, see Define sweep configuration.
Line 19: Pass the sweep configuration dictionary to wandb.sweep. This initializes the sweep. This returns a sweep ID (sweep_id). For more information on how to initialize sweeps, see Initialize sweeps.
Line 33: Use the wandb.init() API to generate a background process to sync and log data as a W&B Run.
Line 37-39: (Optional) define values from wandb.config instead of defining hard coded values.
Line 45: Log the metric we want to optimize with wandb.log. You must log the metric defined in your configuration. Within the configuration dictionary (sweep_configuration in this example) we defined the sweep to maximize the val_acc value.
Line 54: Start the sweep with the wandb.agent API call. Provide the sweep ID (line 19), the name of the function the sweep will execute (function=main), and set the maximum number of runs to four (count=4). For more information on how to start a W&B Sweep, see Start sweep agents.
import wandb
import numpy as np
import random
# Define sweep config
sweep_configuration = {
    "method": "random",
    "name": "sweep",
    "metric": {"goal": "maximize", "name": "val_acc"},
    "parameters": {
        "batch_size": {"values": [16, 32, 64]},
        "epochs": {"values": [5, 10, 15]},
        "lr": {"max": 0.1, "min": 0.0001},
    },
}

# Initialize sweep by passing in config.
# (Optional) Provide a name of the project.
sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep")


# Define training function that takes in hyperparameter
# values from `wandb.config` and uses them to train a
# model and return metric
def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


def main():
    run = wandb.init()

    # note that we define values from `wandb.config`
    # instead of defining hard values
    lr = wandb.config.lr
    bs = wandb.config.batch_size
    epochs = wandb.config.epochs

    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)
        wandb.log(
            {
                "epoch": epoch,
                "train_acc": train_acc,
                "train_loss": train_loss,
                "val_acc": val_acc,
                "val_loss": val_loss,
            }
        )


# Start sweep job.
wandb.agent(sweep_id, function=main, count=4)
To create a W&B Sweep, we first create a YAML configuration file. The configuration file contains the hyperparameters we want the sweep to explore. In the following example, the batch size (batch_size), epochs (epochs), and the learning rate (lr) hyperparameters are varied during each sweep.
Note that you must provide the name of your Python script for the program key in your YAML file.
Next, we add the following to the code example:
Line 1-2: Import the Weights & Biases Python SDK (wandb) and PyYAML (yaml). PyYAML is used to read in our YAML configuration file.
Line 18: Read in the configuration file.
Line 21: Use the wandb.init() API to generate a background process to sync and log data as a W&B Run. We pass the config object to the config parameter.
Line 25 - 27: Define hyperparameter values from wandb.config instead of using hard coded values.
Line 33-39: Log the metric we want to optimize with wandb.log. You must log the metric defined in your configuration. Within the configuration dictionary (sweep_configuration in this example) we defined the sweep to maximize the val_acc value.
import wandb
import yaml
import random
import numpy as np
def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


def main():
    # Set up your default hyperparameters
    with open("./config.yaml") as file:
        config = yaml.load(file, Loader=yaml.FullLoader)

    run = wandb.init(config=config)

    # Note that we define values from `wandb.config`
    # instead of defining hard values
    lr = wandb.config.lr
    bs = wandb.config.batch_size
    epochs = wandb.config.epochs

    for epoch in np.arange(1, epochs):
        train_acc, train_loss = train_one_epoch(epoch, lr, bs)
        val_acc, val_loss = evaluate_one_epoch(epoch)
        wandb.log(
            {
                "epoch": epoch,
                "train_acc": train_acc,
                "train_loss": train_loss,
                "val_acc": val_acc,
                "val_loss": val_loss,
            }
        )


# Call the main function.
main()
Navigate to your CLI. Within your CLI, set a maximum number of runs for the sweep agent to try. This step is optional. In the following example we set the maximum number to five.
NUM=5
Next, initialize the sweep with the wandb sweep command. Provide the name of the YAML file. Optionally provide the name of the project for the project flag (--project):
wandb sweep --project sweep-demo-cli config.yaml
This returns a sweep ID. For more information on how to initialize sweeps, see Initialize sweeps.
Copy the sweep ID and replace sweepID in the following code snippet to start the sweep job with the wandb agent command:
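wandb agent --count $NUM <your-entity>/sweep-demo-cli/sweepID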
For more information on how to start sweep jobs, see Start sweep jobs.
Consideration when logging metrics
Ensure that you log the metric you specify in your sweep configuration explicitly to W&B. Do not log metrics for your sweep inside of a sub-directory.
For example, consider the following pseudocode. A user wants to log the validation loss ("val_loss": loss). First they pass the values into a dictionary. However, the dictionary passed to wandb.log does not explicitly access the key-value pair in the dictionary.
Instead, explicitly access the key-value pair within the Python dictionary when you pass the dictionary to the wandb.log method. Both patterns are sketched below:
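A sketch of both patterns (the variable names are illustrative):
# ❌ Incorrect: "val_loss" is nested inside another dictionary, so the sweep
# cannot find the metric it is optimizing
val_metrics = {"val_loss": loss, "val_acc": val_acc}
wandb.log({"val_metrics": val_metrics})

# ✅ Correct: access the key-value pairs explicitly when logging
val_metrics = {"val_loss": loss, "val_acc": val_acc}
wandb.log({"val_loss": val_metrics["val_loss"], "val_acc": val_metrics["val_acc"]})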
Learn how to create configuration files for sweeps.
A W&B Sweep combines a strategy for exploring hyperparameter values with the code that evaluates them. The strategy can be as simple as trying every option or as complex as Bayesian Optimization and Hyperband (BOHB).
Define a sweep configuration either in a Python dictionary or a YAML file. How you define your sweep configuration depends on how you want to manage your sweep.
Define your sweep configuration in a YAML file if you want to initialize a sweep and start a sweep agent from the command line. Define your sweep in a Python dictionary if you initialize a sweep and start a sweep entirely within a Python script or Jupyter notebook.
The following guide describes how to format your sweep configuration. See Sweep configuration options for a comprehensive list of top-level sweep configuration keys.
Basic structure
Both sweep configuration format options (YAML and Python dictionary) utilize key-value pairs and nested structures.
Use top-level keys within your sweep configuration to define qualities of your sweep search such as the name of the sweep (name key), the parameters to search through (parameters key), the methodology to search the parameter space (method key), and more.
For example, the following code snippets show the same sweep configuration defined within a YAML file and within a Python dictionary. Within the sweep configuration there are five top level keys specified: program, name, method, metric, and parameters.
Define a sweep configuration in a YAML file if you want to manage sweeps interactively from the command line (CLI)
Within the top level parameters key, the following keys are nested: learning_rate, batch_size, epoch, and optimizer. For each of the nested keys you specify, you can provide one or more values, a distribution, a probability, and more. For more information, see the parameters section in Sweep configuration options.
Double nested parameters
Sweep configurations support nested parameters. To delineate a nested parameter, use an additional parameters key under the top level parameter name. Sweep configs support multi-level nesting.
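For example, a sketch of a double-nested parameter in a Python sweep configuration (the parameter names are illustrative):
sweep_configuration = {
    "method": "random",
    "metric": {"goal": "maximize", "name": "val_acc"},
    "parameters": {
        "optimizer": {
            # the additional `parameters` key marks `optimizer` as a nested parameter
            "parameters": {
                "learning_rate": {"values": [0.01, 0.001]},
                "momentum": {"value": 0.9},
            },
        },
    },
}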
Specify a probability distribution for your random variables if you use a Bayesian or random hyperparameter search. For each hyperparameter:
Create a top level parameters key in your sweep config.
Within the parameters key, nest the following:
Specify the name of hyperparameter you want to optimize.
Specify the distribution you want to use for the distribution key. Nest the distribution key-value pair underneath the hyperparameter name.
Specify one or more values to explore. The value (or values) should be inline with the distribution key.
(Optional) Use an additional parameters key under the top level parameter name to delineate a nested parameter.
Nested parameters defined in sweep configuration overwrite keys specified in a W&B run configuration.
For example, suppose you initialize a W&B run with a configuration that contains a nested key in a train.py Python script. You then define a sweep configuration in a dictionary called sweep_configuration that defines the same nested parameter, and pass the sweep config dictionary to wandb.sweep to initialize the sweep. The following sketch illustrates this scenario (the parameter names are illustrative):
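import wandb

# train.py: the run is initialized with a manually set nested key
run = wandb.init(config={"nested_param": {"manual_key": 1}})

# The sweep configuration defines the same nested parameter
sweep_configuration = {
    "method": "random",
    "metric": {"goal": "maximize", "name": "val_acc"},
    "parameters": {
        "nested_param": {
            "parameters": {
                "learning_rate": {"values": [0.01, 0.001]},
            },
        },
    },
}

# Pass the sweep config dictionary to wandb.sweep to initialize the sweep
sweep_id = wandb.sweep(sweep=sweep_configuration, project="<project>")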
The nested_param.manual_key that is passed when the W&B run is initialized is not accessible in the sweep's runs. The run.config only possesses the key-value pairs that are defined in the sweep configuration dictionary.
Sweep configuration template
The following template shows how you can configure parameters and specify search constraints. Replace hyperparameter_name with the name of your hyperparameter and any values enclosed in <>.
The following tabs show how to specify common command macros:
Remove the ${interpreter} macro and provide a value explicitly to hardcode the Python interpreter. For example, the following sketch demonstrates how to do this:
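A sketch of the command key, assuming python3 as the hardcoded interpreter:
command:
  - ${env}
  - python3
  - ${program}
  - ${args}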
If your program does not use argument parsing you can avoid passing arguments all together and take advantage of wandb.init picking up sweep parameters into wandb.config automatically:
command:
  - ${env}
  - ${interpreter}
  - ${program}
You can change the command to pass arguments the way tools like Hydra expect. See Hydra with W&B for more information.
A sweep configuration consists of nested key-value pairs. Use top-level keys within your sweep configuration to define qualities of your sweep search such as the parameters to search through (parameter key), the methodology to search the parameter space (method key), and more.
The following table lists top-level sweep configuration keys and a brief description. See the respective sections for more information about each key.
Key
Description
command
Command structure for invoking and passing arguments to the training script
run_cap
Maximum number of runs for this sweep
See the Sweep configuration structure for more information on how to structure your sweep configuration.
metric
Use the metric top-level sweep configuration key to specify the name, the goal, and the target metric to optimize.
Key
Description
name
Name of the metric to optimize.
goal
Either minimize or maximize (Default is minimize).
target
Goal value for the metric you are optimizing. The sweep does not create new runs if or when a run reaches the target value that you specify. Active agents that have a run executing (when the run reaches the target) wait until the run completes before the agent stops creating new runs.
parameters
In your YAML file or Python script, specify parameters as a top level key. Within the parameters key, provide the name of a hyperparameter you want to optimize. Common hyperparameters include: learning rate, batch size, epochs, optimizers, and more. For each hyperparameter you define in your sweep configuration, specify one or more search constraints.
The following table shows supported hyperparameter search constraints. Based on your hyperparameter and use case, use one of the search constraints below to tell your sweep agent where (in the case of a distribution) or what (value, values, and so forth) to search or use.
Search constraint
Description
values
Specifies all valid values for this hyperparameter. Compatible with grid.
value
Specifies the single valid value for this hyperparameter. Compatible with grid.
distribution
Specify a probability distribution. See the note following this table for information on default values.
probabilities
Specify the probability of selecting each element of values when using random.
min, max
(int or float) Maximum and minimum values. If int, for int_uniform-distributed hyperparameters. If float, for uniform-distributed hyperparameters.
mu
(float) Mean parameter for normal - or lognormal -distributed hyperparameters.
sigma
(float) Standard deviation parameter for normal - or lognormal -distributed hyperparameters.
q
(float) Quantization step size for quantized hyperparameters.
parameters
Nest other parameters inside a root level parameter.
W&B sets the following distributions based on the following conditions if a distribution is not specified:
categorical if you specify values
int_uniform if you specify max and min as integers
uniform if you specify max and min as floats
constant if you provide a single value with value
method
Specify the hyperparameter search strategy with the method key. There are three hyperparameter search strategies to choose from: grid, random, and Bayesian search.
Grid search
Iterate over every combination of hyperparameter values. Grid search makes uninformed decisions on the set of hyperparameter values to use on each iteration. Grid search can be computationally costly.
Grid search executes forever if it is searching within a continuous search space.
Random search
Choose a random, uninformed, set of hyperparameter values on each iteration based on a distribution. Random search runs forever unless you stop the process from the command line, within your python script, or the W&B App UI.
Specify the distribution space with the distribution key if you choose random (method: random) search.
Bayesian search
In contrast to random and grid search, Bayesian models make informed decisions. Bayesian optimization uses a probabilistic model to decide which values to use through an iterative process of testing values on a surrogate function before evaluating the objective function. Bayesian search works well for small numbers of continuous parameters but scales poorly. For more information about Bayesian search, see the Bayesian Optimization Primer paper.
Bayesian search runs forever unless you stop the process from the command line, within your python script, or the W&B App UI.
Distribution options for random and Bayesian search
Within the parameter key, nest the name of the hyperparameter. Next, specify the distribution key and specify a distribution for the value.
The following table lists the distributions that W&B supports.
Value for distribution key
Description
constant
Constant distribution. Must specify the constant value (value) to use.
categorical
Categorical distribution. Must specify all valid values (values) for this hyperparameter.
int_uniform
Discrete uniform distribution on integers. Must specify max and min as integers.
uniform
Continuous uniform distribution. Must specify max and min as floats.
q_uniform
Quantized uniform distribution. Returns round(X / q) * q where X is uniform. q defaults to 1.
log_uniform
Log-uniform distribution. Returns a value X between exp(min) and exp(max)such that the natural logarithm is uniformly distributed between min and max.
log_uniform_values
Log-uniform distribution. Returns a value X between min and max such that log(X) is uniformly distributed between log(min) and log(max).
q_log_uniform
Quantized log uniform. Returns round(X / q) * q where X is log_uniform. q defaults to 1.
q_log_uniform_values
Quantized log uniform. Returns round(X / q) * q where X is log_uniform_values. q defaults to 1.
inv_log_uniform
Inverse log uniform distribution. Returns X, where log(1/X) is uniformly distributed between min and max.
inv_log_uniform_values
Inverse log uniform distribution. Returns X, where log(1/X) is uniformly distributed between log(1/max) and log(1/min).
normal
Normal distribution. Return value is normally distributed with mean mu (default 0) and standard deviation sigma (default 1).
q_normal
Quantized normal distribution. Returns round(X / q) * q where X is normal. q defaults to 1.
log_normal
Log normal distribution. Returns a value X such that the natural logarithm log(X) is normally distributed with mean mu (default 0) and standard deviation sigma (default 1).
q_log_normal
Quantized log normal distribution. Returns round(X / q) * q where X is log_normal. q defaults to 1.
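As a hedged sketch of how these distribution keys fit into a configuration, the following parameters block (hypothetical hyperparameter names, written as a nested Python dictionary) sets an explicit distribution for each hyperparameter:

parameters_dict = {
    "learning_rate": {
        "distribution": "log_uniform_values",
        "min": 1e-5,
        "max": 1e-1,
    },
    "dropout": {
        "distribution": "q_uniform",
        "min": 0.1,
        "max": 0.5,
        "q": 0.1,  # quantization step size
    },
    "weight_init_scale": {
        "distribution": "normal",
        "mu": 0,
        "sigma": 1,
    },
}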
early_terminate
Use early termination (early_terminate) to stop poorly performing runs. If early termination occurs, W&B stops the current run before it creates a new run with a new set of hyperparameter values.
You must specify a stopping algorithm if you use early_terminate. Nest the type key within early_terminate within your sweep configuration.
Stopping algorithm
W&B currently supports Hyperband stopping algorithm.
Hyperband hyperparameter optimization evaluates whether a program should stop or continue at one or more pre-set iteration counts, called brackets.
When a W&B run reaches a bracket, the sweep compares that run’s metric to all previously reported metric values. The sweep terminates the run if the run’s metric value is too high (when the goal is minimization) or if the run’s metric is too low (when the goal is maximization).
Brackets are based on the number of logged iterations. The number of brackets corresponds to the number of times you log the metric you are optimizing. The iterations can correspond to steps, epochs, or something in between. The numerical value of the step counter is not used in bracket calculations.
Specify either min_iter or max_iter to create a bracket schedule.
Key
Description
min_iter
Specify the iteration for the first bracket
max_iter
Specify the maximum number of iterations.
s
Specify the total number of brackets (required for max_iter)
eta
Specify the bracket multiplier schedule (default: 3).
strict
Enable ‘strict’ mode that prunes runs aggressively, more closely following the original Hyperband paper. Defaults to false.
Hyperband checks which W&B runs to end once every few minutes. The end run timestamp might differ from the specified brackets if your run or iteration are short.
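The following is a minimal sketch of an early_terminate block inside a sweep configuration written as a nested Python dictionary. The metric and hyperparameter names are hypothetical:

sweep_configuration = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
    },
    # Stop poorly performing runs with Hyperband; the first bracket is at iteration 3
    "early_terminate": {"type": "hyperband", "min_iter": 3, "eta": 3},
}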
command
Modify the format and contents with nested values within the command key. You can directly include fixed components such as filenames.
On Unix systems, /usr/bin/env ensures that the OS chooses the correct Python interpreter based on the environment.
W&B supports the following macros for variable components of the command:
Command macro
Description
${env}
/usr/bin/env on Unix systems, omitted on Windows.
${interpreter}
Expands to python.
${program}
Training script filename specified by the sweep configuration program key.
${args}
Hyperparameters and their values in the form --param1=value1 --param2=value2.
${args_no_boolean_flags}
Hyperparameters and their values in the form --param1=value1 except boolean parameters are in the form --boolean_flag_param when True and omitted when False.
${args_no_hyphens}
Hyperparameters and their values in the form param1=value1 param2=value2.
${args_json}
Hyperparameters and their values encoded as JSON.
${args_json_file}
The path to a file containing the hyperparameters and their values encoded as JSON.
${envvar}
A way to pass environment variables. ${envvar:MYENVVAR} expands to the value of the MYENVVAR environment variable.
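As a hedged sketch, the following configuration (a nested Python dictionary; the train.py filename and hyperparameter are hypothetical) combines fixed components and macros in the command key. With this command, each run is launched roughly as /usr/bin/env python train.py --learning_rate=<value>:

sweep_configuration = {
    "program": "train.py",  # hypothetical training script
    "method": "grid",
    "parameters": {"learning_rate": {"values": [0.01, 0.1]}},
    # Fixed components plus macros that W&B expands for each run
    "command": ["${env}", "${interpreter}", "${program}", "${args}"],
}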
2.2.4 - Initialize a sweep
Initialize a W&B Sweep
W&B uses a Sweep Controller to manage sweeps on the cloud (standard) or locally (local) across one or more machines. After a run completes, the sweep controller issues a new set of instructions describing a new run to execute. These instructions are picked up by agents that actually perform the runs. In a typical W&B Sweep, the controller lives on the W&B server. Agents live on your machines.
The following code snippets demonstrate how to initialize sweeps with the CLI and within a Jupyter Notebook or Python script.
Before you initialize a sweep, make sure you have a sweep configuration defined either in a YAML file or a nested Python dictionary object in your script. For more information, see Define sweep configuration.
Both the W&B Sweep and the W&B Run must be in the same project. Therefore, the name you provide when you initialize W&B (wandb.init) must match the name of the project you provide when you initialize a W&B Sweep (wandb.sweep).
Use the W&B SDK to initialize a sweep. Pass the sweep configuration dictionary to the sweep parameter. Optionally provide the name of the project for the project parameter (project) where you want the output of the W&B Run to be stored. If the project is not specified, the run is put in an “Uncategorized” project.
The wandb.sweep function returns the sweep ID. The sweep ID includes the entity name and the project name. Make a note of the sweep ID.
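A minimal sketch of initializing a sweep with the SDK; the project name, metric, and hyperparameter are hypothetical:

import wandb

sweep_configuration = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {"learning_rate": {"min": 0.0001, "max": 0.1}},
}

# Returns the sweep ID, which includes the entity and project name
sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-awesome-project")
print(sweep_id)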
Use the W&B CLI to initialize a sweep. Provide the name of your configuration file. Optionally provide the name of the project for the project flag. If the project is not specified, the W&B Run is put in an “Uncategorized” project.
Use the wandb sweep command to initialize a sweep. The following code example initializes a sweep for a sweeps_demo project and uses a config.yaml file for the configuration.
wandb sweep --project sweeps_demo config.yaml
This command will print out a sweep ID. The sweep ID includes the entity name and the project name. Make a note of the sweep ID.
2.2.5 - Start or stop a sweep agent
Start or stop a W&B Sweep Agent on one or more machines.
Start a W&B Sweep on one or more agents on one or more machines. W&B Sweep agents query the W&B server you launched when you initialized a W&B Sweep (wandb sweep) for hyperparameters and use them to run model training.
To start a W&B Sweep agent, provide the W&B Sweep ID that was returned when you initialized a W&B Sweep. The W&B Sweep ID has the form:
entity/project/sweep_ID
Where:
entity: Your W&B username or team name.
project: The name of the project where you want the output of the W&B Run to be stored. If the project is not specified, the run is put in an “Uncategorized” project.
sweep_ID: The pseudo random, unique ID generated by W&B.
Provide the name of the function the W&B Sweep will execute if you start a W&B Sweep agent within a Jupyter Notebook or Python script.
The following code snippets demonstrate how to start an agent with W&B. We assume you already have a configuration file and you have already initialized a W&B Sweep. For more information about how to define a configuration file, see Define sweep configuration.
Use the wandb agent command to start a sweep. Provide the sweep ID that was returned when you initialized the sweep. Copy and paste the code snippet below and replace sweep_id with your sweep ID:
wandb agent sweep_id
Use the W&B Python SDK library to start a sweep. Provide the sweep ID that was returned when you initialized the sweep. In addition, provide the name of the function the sweep will execute.
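A minimal sketch, assuming sweep_id holds the ID returned when you initialized the sweep; the train function, hyperparameter name, and logged metric are hypothetical:

import wandb

def train():
    # Each invocation starts a run whose config holds the hyperparameters chosen by the sweep
    run = wandb.init()
    lr = run.config.learning_rate   # hypothetical hyperparameter name
    val_loss = 1.0 / lr             # stand-in for real training and evaluation
    run.log({"val_loss": val_loss})
    run.finish()

wandb.agent(sweep_id, function=train)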
Random and Bayesian searches will run forever. You must stop the process from the command line, within your Python script, or in the Sweeps UI.
Optionally specify the number of W&B Runs a Sweep agent should try. The following code snippets demonstrate how to set a maximum number of W&B Runs with the CLI and within a Jupyter Notebook, Python script.
First, initialize your sweep. For more information, see Initialize sweeps.
sweep_id = wandb.sweep(sweep_config)
Next, start the sweep job. Provide the sweep ID generated from sweep initiation. Pass an integer value to the count parameter to set the maximum number of runs to try.
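Continuing the sketch above, a hedged example that limits the agent to at most four runs:

wandb.agent(sweep_id, function=train, count=4)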
If you start a new run after the sweep agent has finished, within the same script or notebook, then you should call wandb.teardown() before starting the new run.
Parallelize W&B Sweep agents on multi-core or multi-GPU machine.
Parallelize your W&B Sweep agents on a multi-core or multi-GPU machine. Before you get started, ensure you have initialized your W&B Sweep. For more information on how to initialize a W&B Sweep, see Initialize sweeps.
Parallelize on a multi-CPU machine
Depending on your use case, explore the following tabs to learn how to parallelize W&B Sweep agents using the CLI or within a Jupyter Notebook.
Use the wandb agent command to parallelize your W&B Sweep agent across multiple CPUs with the terminal. Provide the sweep ID that was returned when you initialized the sweep.
Open more than one terminal window on your local machine.
Copy and paste the code snippet below and replace sweep_id with your sweep ID:
wandb agent sweep_id
Use the W&B Python SDK library to parallelize your W&B Sweep agent across multiple CPUs within Jupyter Notebooks. Ensure you have the sweep ID that was returned when you initialized the sweep. In addition, provide the name of the function the sweep will execute for the function parameter:
Open more than one Jupyter Notebook.
Copy and paste the W&B Sweep ID into multiple Jupyter notebooks to parallelize a W&B Sweep. For example, if you have the sweep ID stored in a variable called sweep_id and the name of the function is function_name, you can paste the following code snippet into multiple Jupyter notebooks to parallelize your sweep:
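The referenced snippet appears to have been lost during extraction; a minimal sketch, assuming sweep_id and function_name are defined as described:

import wandb

wandb.agent(sweep_id, function=function_name)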
Follow the procedure outlined to parallelize your W&B Sweep agent across multiple GPUs with a terminal using CUDA Toolkit:
Open more than one terminal window on your local machine.
Specify the GPU instance to use with CUDA_VISIBLE_DEVICES when you start a W&B Sweep job (wandb agent). Assign CUDA_VISIBLE_DEVICES an integer value corresponding to the GPU instance to use.
For example, suppose you have two NVIDIA GPUs on your local machine. Open a terminal window and set CUDA_VISIBLE_DEVICES to 0 (CUDA_VISIBLE_DEVICES=0). Replace sweep_ID in the following example with the W&B Sweep ID that is returned when you initialize a W&B Sweep:
Terminal 1
CUDA_VISIBLE_DEVICES=0 wandb agent sweep_ID
Open a second terminal window. Set CUDA_VISIBLE_DEVICES to 1 (CUDA_VISIBLE_DEVICES=1). Paste the same W&B Sweep ID for sweep_ID in the following code snippet:
Terminal 2
CUDA_VISIBLE_DEVICES=1 wandb agent sweep_ID
2.2.7 - Visualize sweep results
Visualize the results of your W&B Sweeps with the W&B App UI.
Visualize the results of your W&B Sweeps with the W&B App UI. Navigate to the W&B App UI at https://wandb.ai/home. Choose the project that you specified when you initialized a W&B Sweep. You will be redirected to your project workspace. Select the Sweep icon on the left panel (broom icon). From the Sweep UI, select the name of your Sweep from the list.
By default, W&B will automatically create a parallel coordinates plot, a parameter importance plot, and a scatter plot when you start a W&B Sweep job.
Parallel coordinates charts summarize the relationship between large numbers of hyperparameters and model metrics at a glance. For more information on parallel coordinates plots, see Parallel coordinates.
The scatter plot (left) compares the W&B Runs that were generated during the Sweep. For more information about scatter plots, see Scatter Plots.
The parameter importance plot (right) lists the hyperparameters that were the best predictors of, and highly correlated to, desirable values of your metrics. For more information about parameter importance plots, see Parameter Importance.
You can alter the dependent and independent values (x and y axes) that are used automatically. Within each panel there is a pencil icon called Edit panel. Choose Edit panel. A modal will appear. Within the modal, you can alter the behavior of the graph.
For more information on all default W&B visualization options, see Panels. See the Data Visualization docs for information on how to create plots from W&B Runs that are not part of a W&B Sweep.
2.2.8 - Manage sweeps with the CLI
Pause, resume, and cancel a W&B Sweep with the CLI.
Pause, resume, and cancel a W&B Sweep with the CLI. Pausing a W&B Sweep tells the W&B agent that new W&B Runs should not be executed until the Sweep is resumed. Resuming a Sweep tells the agent to continue executing new W&B Runs. Stopping a W&B Sweep tells the W&B Sweep agent to stop creating or executing new W&B Runs. Cancelling a W&B Sweep tells the Sweep agent to kill currently executing W&B Runs and stop executing new Runs.
In each case, provide the W&B Sweep ID that was generated when you initialized a W&B Sweep. Optionally open a new terminal window to execute the following commands. A new terminal window makes it easier to execute a command if a W&B Sweep is printing output statements to your current terminal window.
Use the following guidance to pause, resume, and cancel sweeps.
Pause sweeps
Pause a W&B Sweep so it temporarily stops executing new W&B Runs. Use the wandb sweep --pause command to pause a W&B Sweep. Provide the W&B Sweep ID that you want to pause.
wandb sweep --pause entity/project/sweep_ID
Resume sweeps
Resume a paused W&B Sweep with the wandb sweep --resume command. Provide the W&B Sweep ID that you want to resume:
wandb sweep --resume entity/project/sweep_ID
Stop sweeps
Stop a W&B Sweep to stop executing new W&B Runs and let currently executing Runs finish.
wandb sweep --stop entity/project/sweep_ID
Cancel sweeps
Cancel a sweep to kill all running runs and stop running new runs. Use the wandb sweep --cancel command to cancel a W&B Sweep. Provide the W&B Sweep ID that you want to cancel.
wandb sweep --cancel entity/project/sweep_ID
For a full list of CLI command options, see the wandb sweep CLI Reference Guide.
Pause, resume, stop, and cancel a sweep across multiple agents
Pause, resume, stop, or cancel a W&B Sweep across multiple agents from a single terminal. For example, suppose you have a multi-core machine. After you initialize a W&B Sweep, you open new terminal windows and copy the Sweep ID to each new terminal.
Within any terminal, use the wandb sweep CLI command to pause, resume, stop, or cancel a W&B Sweep. For example, the following code snippet demonstrates how to pause a W&B Sweep across multiple agents with the CLI:
wandb sweep --pause entity/project/sweep_ID
Specify the --resume flag along with the Sweep ID to resume the Sweep across your agents:
wandb sweep --resume entity/project/sweep_ID
For more information on how to parallelize W&B agents, see Parallelize agents.
Description: We examine agents trained with different side effect penalties on three different tasks: pattern creation, pattern removal, and navigation.
Description: How do we distinguish signal from pareidolia (imaginary patterns)? This article showcases what is possible with W&B and aims to inspire further exploration.
Description: Explore why hyperparameter optimization matters and look at three algorithms to automate hyperparameter tuning for your machine learning models.
The following how-to-guide demonstrates how to solve real-world problems with W&B:
Description: How to use W&B Sweeps for hyperparameter tuning using XGBoost.
Sweep GitHub repository
W&B advocates open source and welcomes contributions from the community. Find the GitHub repository at https://github.com/wandb/sweeps. For information on how to contribute to the W&B open source repo, see the W&B GitHub Contribution guidelines.
2.2.10 - Manage algorithms locally
Search and stop algorithms locally instead of using the W&B cloud-hosted service.
The hyperparameter controller is hosted by Weights & Biases as a cloud service by default. W&B agents communicate with the controller to determine the next set of parameters to use for training. The controller is also responsible for running early stopping algorithms to determine which runs can be stopped.
The local controller feature allows you to run the search and stop algorithms locally. The local controller gives you the ability to inspect and instrument the code in order to debug issues as well as develop new features that can be incorporated into the cloud service.
This feature is offered to support faster development and debugging of new algorithms for the Sweeps tool. It is not intended for actual hyperparameter optimization workloads.
Before you get started, you must install the W&B SDK (wandb). Type the following code snippet into your command line:
pip install wandb sweeps
The following examples assume you already have a configuration file and a training loop defined in a Python script or Jupyter Notebook. For more information about how to define a configuration file, see Define sweep configuration.
Run the local controller from the command line
Initialize a sweep as you normally would when you use the hyperparameter controller hosted by W&B as a cloud service. Specify the controller flag (controller) to indicate you want to use the local controller for W&B sweep jobs:
wandb sweep --controller config.yaml
Alternatively, you can separate initializing a sweep and specifying that you want to use a local controller into two steps.
To separate the steps, first add the following key-value to your sweep’s YAML configuration file:
controller:
type: local
Next, initialize the sweep:
wandb sweep config.yaml
After you initialize the sweep, start a controller with wandb controller:
# The wandb sweep command prints a sweep_id
wandb controller {user}/{entity}/{sweep_id}
Once you have specified that you want to use a local controller, start one or more Sweep agents to execute the sweep. Start a W&B Sweep agent as you normally would. For more information, see Start sweep agents.
wandb agent sweep_ID
Run a local controller with W&B Python SDK
The following code snippets demonstrate how to specify and use a local controller with the W&B Python SDK.
The simplest way to use a controller with the Python SDK is to pass the sweep ID to the wandb.controller method. Next, use the returned object's run method to start the sweep job:
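A minimal sketch of that pattern, assuming sweep_id holds the ID returned by wandb.sweep:

import wandb

sweep = wandb.controller(sweep_id)
sweep.run()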
Troubleshoot common error messages with the guidance suggested.
CommError, Run does not exist and ERROR Error uploading
If both of these error messages are returned, you might have manually set a W&B Run ID. As an example, you might have a code snippet similar to the following defined somewhere in your Jupyter Notebook or Python script:
wandb.init(id="some-string")
You can not set a Run ID for W&B Sweeps because W&B automatically generates random, unique IDs for Runs created by W&B Sweeps.
W&B Run IDs need to be unique within a project.
If you want to set a custom name that appears on tables and graphs, we recommend you pass a name to the name parameter when you initialize W&B. For example:
wandb.init(name="a helpful readable run name")
CUDA out of memory
Refactor your code to use process-based executions if you see this error message. More specifically, rewrite your code to a Python script. In addition, call the W&B Sweep Agent from the CLI, instead of the W&B Python SDK.
As an example, suppose you rewrite your code into a Python script called train.py. Add the name of the training script (train.py) to your YAML sweep configuration file (config.yaml in this example), for example as a top-level program: train.py entry.
Next, add the following to your train.py Python script:
if __name__ == "__main__":
    train()
Navigate to your CLI and initialize a W&B Sweep with wandb sweep:
wandb sweep config.yaml
Make a note of the W&B Sweep ID that is returned. Next, start the Sweep job with wandb agent with the CLI instead of the Python SDK (wandb.agent). Replace sweep_ID in the code snippet below with the Sweep ID that was returned in the previous step:
wandb agent sweep_ID
anaconda 400 error
The following error usually occurs when you do not log the metric that you are optimizing:
wandb: ERROR Error while calling W&B API: anaconda 400 error:
{"code": 400, "message": "TypeError: bad operand type for unary -: 'NoneType'"}
Within your YAML file or nested dictionary you specify a key named “metric” to optimize. Ensure that you log (wandb.log) this metric. In addition, ensure you use the exact metric name that you defined the sweep to optimize within your Python script or Jupyter Notebook. For more information about configuration files, see Define sweep configuration.
2.2.12 - Sweeps UI
Describes the different components of the Sweeps UI.
The state (State), creation time (Created), the entity that started the sweep (Creator), the number of runs completed (Run count), and the time it took to compute the sweep (Compute time) are displayed in the Sweeps UI. The expected number of runs a sweep will create (Est. Runs) is provided when you do a grid search over a discrete search space. You can also click on a sweep to pause, resume, stop, or kill the sweep from the interface.
2.2.13 - Tutorial: Create sweep job from project
Tutorial on how to create sweep jobs from a pre-existing W&B project.
First, create a baseline. Download the PyTorch MNIST dataset example model from W&B examples GitHub repository. Next, train the model. The training script is within the examples/pytorch/pytorch-cnn-fashion directory.
Clone this repo git clone https://github.com/wandb/examples.git
Open this example cd examples/pytorch/pytorch-cnn-fashion
Run a training run manually: python train.py
Optionally explore how the example appears in the W&B App UI dashboard.
From your project page, open the Sweep tab in the sidebar and select Create Sweep.
The auto-generated configuration guesses values to sweep over based on the runs you have completed. Edit the configuration to specify the ranges of hyperparameters you want to try. When you launch the sweep, it starts a new process on the hosted W&B sweep server. This centralized service coordinates the agents, the machines that run the training jobs.
3. Launch agents
Next, launch an agent locally. You can launch up to 20 agents on different machines in parallel if you want to distribute the work and finish the sweep job more quickly. The agent will print out the set of parameters it’s trying next.
Now you’re running a sweep. The following image demonstrates what the dashboard looks like as the example sweep job is running. View an example project page →
Seed a new sweep with existing runs
Launch a new sweep using existing runs that you’ve previously logged.
Open your project table.
Select the runs you want to use with checkboxes on the left side of the table.
Click the dropdown to create a new sweep.
Your sweep will now be set up on our server. All you need to do is launch one or more agents to start running runs.
If you kick off the new sweep as a Bayesian sweep, the selected runs also seed the Gaussian Process.
W&B Registry is now in public preview. Visit this section to learn how to enable it for your deployment type.
W&B Registry is a curated central repository of artifact versions within your organization. Users who have permission within your organization can download, share, and collaboratively manage the lifecycle of all artifacts, regardless of the team that user belongs to.
Each registry consists of one or more collections. Each collection represents a distinct task or use case.
To add an artifact to a registry, you first log a specific artifact version to W&B. Each time you log an artifact, W&B automatically assigns a version to that artifact. Artifact versions use 0 indexing, so the first version is v0, the second version is v1, and so on.
Once you log an artifact to W&B, you can then link that specific artifact version to a collection in the registry.
The term “link” refers to pointers that connect where W&B stores the artifact and where the artifact is accessible in the registry. W&B does not duplicate artifacts when you link an artifact to a collection.
As an example, the following code example shows how to log and link a fake model artifact called “my_model.txt” to a collection named “first-collection” in the core Model registry. More specifically, the code accomplishes the following:
Initialize a W&B run.
Log the artifact to W&B.
Specify the name of the collection and registry you want to link your artifact version to.
Link the artifact to the collection.
Copy and paste the proceeding code snippet into a Python script and run it. Ensure that you have W&B Python SDK version 0.18.6 or greater.
import wandb
import random

# Initialize a W&B run to track the artifact
run = wandb.init(project="registry_quickstart")

# Create a simulated model file so that you can log it
with open("my_model.txt", "w") as f:
    f.write("Model: " + str(random.random()))

# Log the artifact to W&B
logged_artifact = run.log_artifact(
    artifact_or_path="./my_model.txt",
    name="gemma-finetuned",
    type="model",  # Specifies artifact type
)

# Specify the name of the collection and registry
# you want to publish the artifact to
COLLECTION_NAME = "first-collection"
REGISTRY_NAME = "model"

# Link the artifact to the registry
run.link_artifact(
    artifact=logged_artifact,
    target_path=f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}",
)
W&B automatically creates a collection for you if the collection you specify in the returned run object’s link_artifact(target_path = "") method does not exist within the registry you specify.
The URL that your terminal prints directs you to the project where W&B stores your artifact.
Navigate to the Registry App to view artifact versions that you and other members of your organization publish. To do so, first navigate to W&B. Select Registry in the left sidebar below Applications. Select the “Model” registry. Within the registry, you should see the “first-collection” collection with your linked artifact version.
Once you link an artifact version to a collection within a registry, members of your organization can view, download, and manage your artifact versions, create downstream automations, and more if they have the proper permissions.
Enable W&B Registry
Based on your deployment type, satisfy the following conditions to enable W&B Registry:
Deployment type
How to enable
Multi-tenant Cloud
No action required. W&B Registry is available on the W&B App.
Dedicated Cloud
Contact your account team. The Solutions Architect (SA) Team enables W&B Registry within your instance’s operator console. Ensure your instance is on server release version 0.59.2 or newer.
Self-Managed
Enable the environment variable called ENABLE_REGISTRY_UI. To learn more about enabling environment variables in server, visit these docs. In self-managed instances, your infrastructure administrator should enable this environment variable and set it to true. Ensure your instance is on server release version 0.59.2 or newer.
Resources to get started
Depending on your use case, explore the following resources to get started with the W&B Registry:
Use W&B Registry to manage and version your artifacts, track lineage, and promote models through different lifecycle stages.
Automate your model management workflows using webhooks.
Integrate the registry with external ML systems and tools for model evaluation, monitoring, and deployment.
Migrate from the legacy Model Registry to W&B Registry
The legacy Model Registry is scheduled for deprecation with the exact date not yet decided. Before deprecating the legacy Model Registry, W&B will migrate the contents of the legacy Model Registry to the W&B Registry.
Until the migration occurs, W&B supports both the legacy Model Registry and the new Registry.
To view the legacy Model Registry, navigate to the Model Registry in the W&B App. A banner appears at the top of the page that enables you to use the legacy Model Registry App UI.
Reach out to support@wandb.com with any questions or to speak to the W&B Product Team about any concerns about the migration.
A core registry is a template for specific use cases: Models and Datasets.
By default, the Models registry is configured to accept "model" artifact types and the Dataset registry is configured to accept "dataset" artifact types. An admin can add additional accepted artifact types.
The preceding image shows the Models and the Dataset core registry along with a custom registry called Fine_Tuned_Models in the W&B Registry App UI.
A core registry has organization visibility. A registry admin can not change the visibility of a core registry.
Custom registry
Custom registries are not restricted to "model" artifact types or "dataset" artifact types.
You can create a custom registry for each step in your machine learning pipeline, from initial data collection to final model deployment.
For example, you might create a registry called “Benchmark_Datasets” for organizing curated datasets to evaluate the performance of trained models. Within this registry, you might have a collection called “User_Query_Insurance_Answer_Test_Data” that contains a set of user questions and corresponding expert-validated answers that the model has never seen during training.
A custom registry can have either organization or restricted visibility. A registry admin can change the visibility of a custom registry from organization to restricted. However, the registry admin can not change a custom registry’s visibility from restricted to organizational visibility.
Custom registries are particularly useful for organizing project-specific requirements that differ from the default, core registry.
The following procedure describes how to interactively create a registry:
Navigate to the Registry App in the W&B App UI.
Within Custom registry, click on the Create registry button.
Provide a name for your registry in the Name field.
Optionally provide a description about the registry.
Select who can view the registry from the Registry visibility dropdown. See Registry visibility types for more information on registry visibility options.
Select either All types or Specify types from the Accepted artifacts type dropdown.
(If you select Specify types) Add one or more artifact types that your registry accepts.
An artifact type can not be removed from a registry once it is added and saved in the registry’s settings.
Click on the Create registry button.
For example, the preceding image shows a custom registry called “Fine_Tuned_Models” that a user is about to create. The registry is set to Restricted which means that only members that are manually added to the “Fine_Tuned_Models” registry will have access to this registry.
2.3.3 - Configure registry access
Registry admins can limit who can access a registry by navigating to a registry’s settings and assigning a user’s role to Admin, Member, or Viewer. Users can have different roles in different registries. For example, a user can have a viewer role in “Registry A” and a member role in “Registry B”.
Your role in a team has no impact on, or relationship to, your role in any registry.
The following table lists the different roles a user can have and their permissions:
| Permission | Permission Group | Viewer | Member | Admin | Owner |
|---|---|---|---|---|---|
| View a collection’s details | Read | X | X | X | X |
| View a linked artifact’s details | Read | X | X | X | X |
| Usage: Consume an artifact in a registry with use_artifact | Read | X | X | X | X |
| Download a linked artifact | Read | X | X | X | X |
| Download files from an artifact’s file viewer | Read | X | X | X | X |
| Search a registry | Read | X | X | X | X |
| View a registry’s settings and user list | Read | X | X | X | X |
| Create a new automation for a collection | Create | | X | X | X |
| Turn on Slack notifications for new version being added | Create | | X | X | X |
| Create a new collection | Create | | X | X | X |
| Create a new custom registry | Create | | X | X | X |
| Edit collection card (description) | Update | | X | X | X |
| Edit linked artifact description | Update | | X | X | X |
| Add or delete a collection’s tag | Update | | X | X | X |
| Add or delete an alias from a linked artifact | Update | | X | X | X |
| Link a new artifact | Update | | X | X | X |
| Edit allowed types list for a registry | Update | | X | X | X |
| Edit custom registry name | Update | | X | X | X |
| Delete a collection | Delete | | X | X | X |
| Delete an automation | Delete | | X | X | X |
| Unlink an artifact from a registry | Delete | | X | X | X |
| Edit accepted artifact types for a registry | Admin | | | X | X |
| Change registry visibility (Organization or Restricted) | Admin | | | X | X |
| Add users to a registry | Admin | | | X | X |
| Assign or change a user’s role in a registry | Admin | | | X | X |
Configure user roles in a registry
Navigate to the Registry App in the W&B App UI.
Select the registry you want to configure.
Click on the gear icon on the upper right hand corner.
Scroll to the Registry members and roles section.
Within the Member field, search for the user you want to edit permissions for.
Click on the user’s role within the Registry role column.
From the dropdown, select the role you want to assign to the user.
Remove a user from a registry
Navigate to the Registry App in the W&B App UI.
Select a core or custom registry.
Click on the gear icon on the upper right hand corner.
Scroll to the Registry members and roles section and type in the username of the member you want to remove.
Click the Delete button.
Registry visibility types
There are two registry visibility types: restricted or organization visibility. The following table describes who has access to the registry by default:
| Visibility | Description | Default role | Example |
|---|---|---|---|
| Organization | Everyone in the org can access the registry. | By default, organization administrators are an admin for the registry. All other users are a viewer in the registry by default. | Core registry |
| Restricted | Only invited org members can access the registry. | The user who created the restricted registry is the only user in the registry by default, and is the organization’s owner. | Custom registry or core registry |
Restrict visibility to a registry
Restrict who can view and access a custom registry. You can restrict visibility to a registry when you create a custom registry or after you create a custom registry. A custom registry can have either restricted or organization visibility. For more information on registry visibilities, see Registry visibility types.
The following steps describe how to restrict the visibility of a custom registry that already exists:
Navigate to the Registry App in the W&B App UI.
Select a registry.
Click on the gear icon on the upper right hand corner.
From the Registry visibility dropdown, select the desired registry visibility.
Continue if you select Restricted visibility:
Add members of your organization that you want to have access to this registry. Scroll to the Registry members and roles section and click on the Add member button.
Within the Member field, add the email or username of the member you want to add.
Click Add new member.
2.3.4 - Create a collection
A collection is a set of linked artifact versions within a registry. Each collection represents a distinct task or use case.
For example, within the core Dataset registry you might have multiple collections. Each collection contains a different dataset such as MNIST, CIFAR-10, or ImageNet.
As another example, you might have a registry called “chatbot” that contains a collection for model artifacts, another collection for dataset artifacts, and another collection for fine-tuned model artifacts.
How you organize a registry and its collections is up to you.
If you are familiar with W&B Model Registry, you might be aware of registered models. Registered models in the Model Registry are now referred to as collections in the W&B Registry.
Collection types
Each collection accepts one, and only one, type of artifact. The type you specify restricts what sort of artifacts you, and other members of your organization, can link to that collection.
You can think of artifact types similar to data types in programming languages such as Python. In this analogy, a collection can store strings, integers, or floats but not a mix of these data types.
For example, suppose you create a collection that accepts “dataset” artifact types. This means that you can only link future artifact versions that have the type “dataset” to this collection. Similarly, you can only link artifacts of type “model” to a collection that accepts only model artifact types.
You specify an artifact’s type when you create that artifact object. Note the type field in wandb.Artifact():
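The referenced snippet is missing here; a minimal sketch with a hypothetical artifact name and type:

import wandb

# The type assigned here restricts which collections can accept this artifact
artifact = wandb.Artifact(
    name="zoo-dataset",  # hypothetical artifact name
    type="dataset",      # artifact type
)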
When you create a collection, you can select from a list of predefined artifact types. The artifact types available to you depend on the registry that the collection belongs to.
Check the types of artifact that a collection accepts
Before you link to a collection, inspect the artifact type that the collection accepts. You can inspect the artifact types that a collection accepts programmatically with the W&B Python SDK or interactively with the W&B App.
An error message appears if you try to link an artifact to a collection that does not accept that artifact type.
You can find the accepted artifact types on the registry card on the homepage or within a registry’s settings page.
For both methods, first navigate to your W&B Registry App.
Within the homepage of the Registry App, you can view the accepted artifact types by scrolling to the registry card of that registry. The gray horizontal ovals within the registry card list the artifact types that registry accepts.
For example, the preceding image shows multiple registry cards on the Registry App homepage. Within the Model registry card, you can see two artifact types: model and model-new.
To view accepted artifact types within a registry’s settings page:
Click on the registry card you want to view the settings for.
Click on the gear icon in the upper right corner.
Scroll to the Accepted artifact types field.
Programmatically view the artifact types that a registry accepts with the W&B Python SDK:
import wandb

registry_name = "<registry_name>"
artifact_types = wandb.Api().project(name=f"wandb-registry-{registry_name}").artifact_types()
print([artifact_type.name for artifact_type in artifact_types])
Note that the preceding code snippet does not initialize a run. This is because a run is unnecessary if you are only querying the W&B API and not tracking an experiment, artifact, and so on.
Once you know what type of artifact a collection accepts, you can create a collection.
Create a collection
Interactively or programmatically create a collection within a registry. You can not change the type of artifact that a collection accepts after you create it.
Programmatically create a collection
Use the wandb.init.link_artifact() method to link an artifact to a collection. Specify both the collection and the registry with the target_path field as a path that takes the form of:
wandb-registry-{registry_name}/{collection_name}
Where registry_name is the name of the registry and collection_name is the name of the collection. Ensure you append the prefix wandb-registry- to the registry name.
W&B automatically creates a collection for you if you try to link an artifact to a collection that does not exist. If you specify a collection that does exist, W&B links the artifact to the existing collection.
The following code snippet shows how to programmatically create a collection. Ensure you replace the values enclosed in <> with your own:
import wandb

# Initialize a run
run = wandb.init(entity="<team_entity>", project="<project>")

# Create an artifact object
artifact = wandb.Artifact(
    name="<artifact_name>",
    type="<artifact_type>",
)

registry_name = "<registry_name>"
collection_name = "<collection_name>"
target_path = f"wandb-registry-{registry_name}/{collection_name}"

# Link the artifact to a collection
run.link_artifact(artifact=artifact, target_path=target_path)

run.finish()
Interactively create a collection
The following steps describe how to create a collection within a registry using the W&B Registry App UI:
Navigate to the Registry App in the W&B App UI.
Select a registry.
Click on the Create collection button in the upper right hand corner.
Provide a name for your collection in the Name field.
Select a type from the Type dropdown. Or, if the registry enables custom artifact types, provide one or more artifact types that this collection accepts.
Optionally provide a description of your collection in the Description field.
Optionally add one or more tags in the Tags field.
Click Link version.
From the Project dropdown, select the project where your artifact is stored.
From the Artifact collection dropdown, select your artifact.
From the Version dropdown, select the artifact version you want to link to your collection.
Click on the Create collection button.
2.3.5 - Link an artifact version to a registry
Link artifact versions to a collection to make them available to other members in your organization.
When you link an artifact to a registry, this “publishes” that artifact to that registry. Any user that has access to that registry can access the linked artifact versions in the collection.
In other words, linking an artifact to a registry collection brings that artifact version from a private, project-level scope, to a shared organization level scope.
The term “type” refers to the artifact object’s type. When you create an artifact object (wandb.Artifact), or log an artifact (wandb.init.log_artifact), you specify a type for the type parameter.
Link an artifact to a collection
Link an artifact version to a collection interactively or programmatically.
Before you link an artifact to a registry, check the types of artifacts that collection permits. For more information about collection types, see “Collection types” within Create a collection.
Based on your use case, follow the instructions described in the tabs below to link an artifact version.
Before you link an artifact to a collection, ensure that the registry that the collection belongs to already exists. To check that the registry exists, navigate to the Registry app on the W&B App UI and search for the name of the registry.
Use the target_path parameter to specify the collection and registry you want to link the artifact version to. The target path consists of the prefix “wandb-registry”, the name of the registry, and the name of the collection, separated by forward slashes:
wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}
Copy and paste the code snippet below to link an artifact version to a collection within an existing registry. Replace values enclosed in <> with your own:
import wandb

# Initialize a run
run = wandb.init(
    entity="<team_entity>",
    project="<project_name>",
)

# Create an artifact object
# The type parameter specifies both the type of the
# artifact object and the collection type
artifact = wandb.Artifact(name="<name>", type="<type>")

# Add the file to the artifact object.
# Specify the path to the file on your local machine.
artifact.add_file(local_path="<local_path_to_artifact>")

# Specify the collection and registry to link the artifact to
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
target_path = f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}"

# Link the artifact to the collection
run.link_artifact(artifact=artifact, target_path=target_path)
If you want to link an artifact version to the Model registry or the Dataset registry, set the artifact type to "model" or "dataset", respectively.
Navigate to the Registry App.
Hover your mouse next to the name of the collection you want to link an artifact version to.
Select the meatball menu icon (three horizontal dots) next to View details.
From the dropdown, select Link new version.
From the sidebar that appears, select the name of a team from the Team dropdown.
From the Project dropdown, select the name of the project that contains your artifact.
From the Artifact dropdown, select the name of the artifact.
From the Version dropdown, select the artifact version you want to link to the collection.
Navigate to your project’s artifact browser on the W&B App at: https://wandb.ai/<entity>/<project>/artifacts
Select the Artifacts icon on the left sidebar.
Click on the artifact version you want to link to your registry.
Within the Version overview section, click the Link to registry button.
From the modal that appears on the right of the screen, select an artifact from the Select a register model menu dropdown.
Click Next step.
(Optional) Select an alias from the Aliases dropdown.
Click Link to registry.
View a linked artifact’s metadata, version data, usage, lineage information and more in the Registry App.
View linked artifacts in a registry
View information about linked artifacts such as metadata, lineage, and usage information in the Registry App.
Navigate to the Registry App.
Select the name of the registry that you linked the artifact to.
Select the name of the collection.
From the list of artifact versions, select the version you want to access. Version numbers are incrementally assigned to each linked artifact version starting with v0.
Once you select an artifact version, you can view that version’s metadata, lineage, and usage information.
Make note of the Full Name field within the Version tab. The full name of a linked artifact consists of the registry, collection name, and the alias or index of the artifact version.
You need the full name of a linked artifact to access the artifact version programmatically.
Troubleshooting
Below are some common things to double check if you are not able to link an artifact.
Logging artifacts from a personal account
Artifacts logged to W&B with a personal entity can not be linked to the registry. Make sure that you log artifacts using a team entity within your organization. Only artifacts logged within an organization’s team can be linked to the organization’s registry.
Ensure that you log an artifact with a team entity if you want to link that artifact to a registry.
Find your team entity
W&B uses the name of your team as the team’s entity. For example, if your team is called team-awesome, your team entity is team-awesome.
You can confirm the name of your team by:
Navigate to your team’s W&B profile page.
Copy the site’s URL. It has the form https://wandb.ai/<team>, where <team> is both the name of your team and the team’s entity.
Log from a team entity
Specify the team as the entity when you initialize a run with wandb.init(). If you do not specify the entity when you initialize a run, the run uses your default entity which may or may not be your team entity.
import wandb

run = wandb.init(
    entity="<team_entity>",
    project="<project_name>",
)
Log the artifact to the run either with run.log_artifact or by creating an Artifact object and then adding files to it with Artifact.add_file():
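A hedged sketch of both options, continuing from the run created above; the file path and artifact name are hypothetical:

# Option 1: log a file directly from a path
run.log_artifact("./my_model.txt", name="my-model", type="model")

# Option 2: build an Artifact object, add files, then log it
artifact = wandb.Artifact(name="my-model", type="model")
artifact.add_file(local_path="./my_model.txt")
run.log_artifact(artifact)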
If an artifact is logged to your personal entity, you will need to re-log it to an entity within your organization.
Confirm the path of a registry in the W&B App UI
There are two ways to confirm the path of a registry with the UI: create an empty collection and view the collection details or copy and paste the autogenerated code on the collection’s home page.
Ensure that you replace the name of the collection from the temporary collection with the name of the collection that you want to link your artifact to.
2.3.6 - Download an artifact from a registry
Use the W&B Python SDK to download an artifact linked to a registry. To download and use an artifact, you need to know the name of the registry, the name of the collection, and the alias or index of the artifact version you want to download.
To download an artifact linked to a registry, you must know the path of that linked artifact. The path consists of the registry name, collection name, and the alias or index of the artifact version you want to access.
Once you have the registry, collection, and alias or index of the artifact version, you can construct the path to the linked artifact using the following string template:
# Artifact name with version index specified
f"wandb-registry-{REGISTRY}/{COLLECTION}:v{INDEX}"

# Artifact name with alias specified
f"wandb-registry-{REGISTRY}/{COLLECTION}:{ALIAS}"
Replace the values within the curly braces {} with the name of the registry, collection, and the alias or index of the artifact version you want to access.
Specify model or dataset to link an artifact version to the core Model registry or the core Dataset registry, respectively.
Once you have the path of the linked artifact, use the wandb.init.use_artifact method to access the artifact and download its contents. The following code snippet shows how to use and download an artifact linked to the W&B Registry. Replace values within <> with your own:
import wandb

REGISTRY = "<registry_name>"
COLLECTION = "<collection_name>"
ALIAS = "<artifact_alias>"

run = wandb.init(
    entity="<team_name>",
    project="<project_name>",
)

artifact_name = f"wandb-registry-{REGISTRY}/{COLLECTION}:{ALIAS}"
# artifact_name = '<artifact_name>'  # Copy and paste the Full name specified on the Registry App
fetched_artifact = run.use_artifact(artifact_or_name=artifact_name)
download_path = fetched_artifact.download()
The .use_artifact() method both creates a run and marks the artifact you download as the input to that run.
Marking an artifact as the input to a run enables W&B to track the lineage of that artifact.
If you do not want to create a run, you can use the wandb.Api() object to access the artifact:
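A minimal sketch of that approach, assuming REGISTRY, COLLECTION, and ALIAS are defined as in the snippet above:

import wandb

api = wandb.Api()
artifact = api.artifact(f"wandb-registry-{REGISTRY}/{COLLECTION}:{ALIAS}")
download_path = artifact.download()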
Example: Use and download an artifact linked to the W&B Registry
The following code example shows how a user can download an artifact linked to a collection called phi3-finetuned in the Fine-tuned Models registry. The alias of the artifact version is set to production.
import wandb

TEAM_ENTITY = "product-team-applications"
PROJECT_NAME = "user-stories"
REGISTRY = "Fine-tuned Models"
COLLECTION = "phi3-finetuned"
ALIAS = "production"

# Initialize a run inside the specified team and project
run = wandb.init(entity=TEAM_ENTITY, project=PROJECT_NAME)

artifact_name = f"wandb-registry-{REGISTRY}/{COLLECTION}:{ALIAS}"

# Access an artifact and mark it as input to your run for lineage tracking
fetched_artifact = run.use_artifact(artifact_or_name=artifact_name)

# Download artifact. Returns path to downloaded contents
downloaded_path = fetched_artifact.download()
See use_artifact and Artifact.download() in the API Reference guide for more information on possible parameters and return type.
Users with a personal entity that belong to multiple organizations
Users with a personal entity that belong to multiple organizations must also specify either the name of their organization or use a team entity when accessing artifacts linked to a registry.
import wandb

REGISTRY = "<registry_name>"
COLLECTION = "<collection_name>"
VERSION = "<version>"

# Ensure you are using your team entity to instantiate the API
api = wandb.Api(overrides={"entity": "<team-entity>"})
artifact_name = f"wandb-registry-{REGISTRY}/{COLLECTION}:{VERSION}"
artifact = api.artifact(name=artifact_name)

# Alternatively, use the org display name or org entity in the path
ORG_NAME = "<org_name>"
api = wandb.Api()
artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY}/{COLLECTION}:{VERSION}"
artifact = api.artifact(name=artifact_name)
Where ORG_NAME is the display name of your organization. Multi-tenant SaaS users can find the name of their organization in the organization’s settings page at https://wandb.ai/account-settings/. Dedicated Cloud and Self-Managed users should contact their account administrator to confirm the organization’s display name.
Copy and paste pre-generated code snippet
W&B creates a code snippet that you can copy and paste into your Python script, notebook, or terminal to download an artifact linked to a registry.
Navigate to the Registry App.
Select the name of the registry that contains your artifact.
Select the name of the collection.
From the list of artifact versions, select the version you want to access.
Select the Usage tab.
Copy the code snippet shown in the Usage API section.
Paste the code snippet into your Python script, notebook, or terminal.
2.3.7 - Organize versions with tags
Use tags to organize collections or artifact versions within collections. You can add, remove, edit tags with the Python SDK or W&B App UI.
Create and add tags to organize your collections or artifact versions within your registry. Add, modify, view, or remove tags on a collection or artifact version with the W&B App UI or the W&B Python SDK.
When to use a tag versus using an alias
Use aliases when you need to reference a specific artifact version uniquely. For example, use an alias such as ‘production’ or ’latest’ to ensure that artifact_name:alias always points to a single, specific version.
Use tags when you want more flexibility for grouping or searching. Tags are ideal when multiple versions or collections can share the same label, and you don’t need the guarantee that only one version is associated with a specific identifier.
Add a tag to a collection
Use the W&B App UI or Python SDK to add a tag to a collection:
Update a tag programmatically by reassigning or by mutating the tags attribute. W&B recommends, and it is good Python practice, that you reassign the tags attribute instead of in-place mutation.
For example, the following code snippet shows common ways to update a list with reassignment. For brevity, we continue the code example from the Add a tag to a collection section:
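The referenced code example appears to have been dropped during extraction; a hedged sketch, assuming the public API’s artifact_collection accessor and placeholder names, that adds a tag and then updates the tags by reassignment:

import wandb

collection = wandb.Api().artifact_collection(
    type_name="<artifact_type>",
    name="<org_name>/wandb-registry-<registry_name>/<collection_name>",
)

collection.tags = ["nightly"]                      # add a tag (hypothetical tag name)
collection.tags = [*collection.tags, "validated"]  # update by reassignment, not mutation
collection.save()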
Click View details next to the name of the collection you want to add a tag to
Scroll down to Versions
Click View next to an artifact version
Within the Version tab, click on the plus icon (+) next to the Tags field and type in the name of the tag
Press Enter on your keyboard
Fetch the artifact version you want to add or update a tag for. Once you have the artifact version, you can access the artifact object’s tags attribute to add or modify tags for that artifact. Pass one or more tags as a list to the artifact’s tags attribute.
Like other artifacts, you can fetch an artifact from W&B without creating a run, or you can create a run and fetch the artifact within that run. In either case, ensure you call the artifact object’s save method to update the artifact on the W&B servers.
Copy and paste an appropriate code cell below to add or modify an artifact version’s tag. Replace the values in <> with your own.
The following code snippet shows how to fetch an artifact and add a tag without creating a new run:
import wandb
ARTIFACT_TYPE = "<TYPE>"
ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
VERSION = "<artifact_version>"

artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"
artifact = wandb.Api().artifact(name=artifact_name, type=ARTIFACT_TYPE)
artifact.tags = ["tag2"]  # Provide one or more tags in a list
artifact.save()
The following code snippet shows how to fetch an artifact and add a tag by creating a new run:
import wandb
ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
VERSION = "<artifact_version>"

run = wandb.init(entity="<entity>", project="<project>")

artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"
artifact = run.use_artifact(artifact_or_name=artifact_name)
artifact.tags = ["tag2"]  # Provide one or more tags in a list
artifact.save()
Update tags that belong to an artifact version
Update a tag programmatically by reassigning or by mutating the tags attribute. W&B recommends, and it is good Python practice, that you reassign the tags attribute instead of in-place mutation.
For example, the following code snippet shows common ways to update a list with reassignment. For brevity, we continue the code example from the Add a tag to an artifact version section:
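Continuing the snippets above (assuming artifact is the fetched artifact version), a sketch of updating tags by reassignment:

artifact.tags = [*artifact.tags, "new-tag"]  # reassign rather than mutate in place
artifact.save()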
Click View details next to the name of the collection you want to add a tag to
Scroll down to Versions section
If an artifact version has one or more tags, you can view those tags within the Tags column.
Fetch the artifact version to view its tags. Once you have the artifact version, you can view tags that belong to that artifact by viewing the artifact object’s tags attribute.
Like other artifacts, you can fetch an artifact from W&B without creating a run or you can create a run and fetch the artifact within that run.
Copy and paste an appropriate code cell below to view an artifact version’s tags. Replace the values in <> with your own.
The following code snippet shows how to fetch and view an artifact version’s tags without creating a new run:
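The referenced snippet is missing here; a minimal sketch that follows the same placeholder naming as the snippets above:

import wandb

ORG_NAME = "<org_name>"
REGISTRY_NAME = "<registry_name>"
COLLECTION_NAME = "<collection_name>"
VERSION = "<artifact_version>"

artifact_name = f"{ORG_NAME}/wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"
artifact = wandb.Api().artifact(name=artifact_name)
print(artifact.tags)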
Use the W&B Python SDK to find artifact versions that have a set of tags:
import wandb
api = wandb.Api()
tagged_artifact_versions = api.artifacts(
    type_name="<artifact_type>",
    name="<artifact_name>",
    tags=["<tag_1>", "<tag_2>"],
)

for artifact_version in tagged_artifact_versions:
    print(artifact_version.tags)
2.3.8 - Migrate from legacy Model Registry
W&B will transition assets from the legacy W&B Model Registry to the new W&B Registry. This migration will be fully managed and triggered by W&B, requiring no intervention from users. The process is designed to be as seamless as possible, with minimal disruption to existing workflows.
The transition will take place once the new W&B Registry includes all the functionalities currently available in the Model Registry. W&B will attempt to preserve current workflows, codebases, and references.
This guide is a living document and will be updated regularly as more information becomes available. For any questions or support, contact support@wandb.com.
How W&B Registry differs from the legacy Model Registry
W&B Registry introduces a range of new features and enhancements designed to provide a more robust and flexible environment for managing models, datasets, and other artifacts.
To view the legacy Model Registry, navigate to the Model Registry in the W&B App. A banner appears at the top of the page that enables you to use the legacy Model Registry App UI.
Organizational visibility
Artifacts linked to the legacy Model Registry have team level visibility. This means that only members of your team can view your artifacts in the legacy W&B Model Registry. W&B Registry has organization level visibility. This means that members across an organization, with correct permissions, can view artifacts linked to a registry.
Restrict visibility to a registry
Restrict who can view and access a custom registry. You can restrict visibility to a registry when you create a custom registry or after you create a custom registry. In a Restricted registry, only selected members can access the content, maintaining privacy and control. For more information about registry visibility, see Registry visibility types.
Create custom registries
Unlike the legacy Model Registry, W&B Registry is not limited to models or dataset registries. You can create custom registries tailored to specific workflows or project needs, capable of holding any arbitrary object type. This flexibility allows teams to organize and manage artifacts according to their unique requirements. For more information on how to create a custom registry, see Create a custom registry.
Custom access control
Each registry supports detailed access control, where members can be assigned specific roles such as Admin, Member, or Viewer. Admins can manage registry settings, including adding or removing members, setting roles, and configuring visibility. This ensures that teams have the necessary control over who can view, manage, and interact with the artifacts in their registries.
Terminology update
Registered models are now referred to as collections.
Summary of changes
| | Legacy W&B Model Registry | W&B Registry |
| --- | --- | --- |
| Artifact visibility | Only members of your team can view or access artifacts | Members in your organization, with correct permissions, can view or access artifacts linked to a registry |
| Custom access control | Not available | Available |
| Custom registry | Not available | Available |
| Terminology update | A set of pointers (links) to model versions are called registered models. | A set of pointers (links) to artifact versions are called collections. |
| wandb.init.link_model | Model Registry specific API | Currently only compatible with the legacy Model Registry |
Preparing for the migration
W&B will migrate registered models (now called collections) and associated artifact versions from the legacy Model Registry to the W&B Registry. This process will be conducted automatically, with no action required from users.
Team visibility to organization visibility
After the migration, your model registry will have organization level visibility. You can restrict who has access to a registry by assigning roles. This helps ensure that only specific members have access to specific registries.
The migration will preserve existing permission boundaries of your current team-level registered models (soon to be called collections) in the legacy W&B Model Registry. Permissions currently defined in the legacy Model Registry will be preserved in the new Registry. This means that collections currently restricted to specific team members will remain protected during and after the migration.
Artifact path continuity
No action is currently required.
During the migration
W&B will initiate the migration process. The migration will occur during a time window that minimizes disruption to W&B services. The legacy Model Registry will transition to a read-only state once the migration begins and will remain accessible for reference.
After the migration
Post-migration, collections, artifact versions, and associated attributes will be fully accessible within the new W&B Registry. The focus is on ensuring that current workflows remain intact, with ongoing support available to help navigate any changes.
Using the new registry
Users are encouraged to explore the new features and capabilities available in the W&B Registry. The Registry will not only support the functionalities currently relied upon but also introduces enhancements such as custom registries, improved visibility, and flexible access controls.
Support is available if you are interested in trying the W&B Registry early, or for new users that prefer to start with Registry and not the legacy W&B Model Registry. Contact support@wandb.com or your Sales MLE to enable this functionality. Note that any early migration will be into a BETA version. The BETA version of W&B Registry might not have all the functionality or features of the legacy Model Registry.
For more details and to learn about the full range of features in the W&B Registry, visit the W&B Registry Guide.
FAQs
Why is W&B migrating assets from Model Registry to W&B Registry?
W&B is evolving its platform to offer more advanced features and capabilities with the new Registry. This migration is a step towards providing a more integrated and powerful toolset for managing models, datasets, and other artifacts.
What needs to be done before the migration?
No action is required from users before the migration. W&B will handle the transition, ensuring that workflows and references are preserved.
Will access to model artifacts be lost?
No, access to model artifacts will be retained after the migration. The legacy Model Registry will remain in a read-only state, and all relevant data will be migrated to the new Registry.
Will metadata related to artifacts be preserved?
Yes, important metadata related to artifact creation, lineage, and other attributes will be preserved during the migration. Users will continue to have access to all relevant metadata after the migration, ensuring that the integrity and traceability of their artifacts remain intact.
Who do I contact if I need help?
Support is available for any questions or concerns. Reach out to support@wandb.com for assistance.
2.3.9 - Model registry
Model registry to manage the model lifecycle from training to production
W&B will no longer support W&B Model Registry after 2024. Users are encouraged to instead use W&B Registry for linking and sharing their model artifacts versions. W&B Registry broadens the capabilities of the legacy W&B Model Registry. For more information about W&B Registry, see the Registry docs.
W&B will migrate existing model artifacts linked to the legacy Model Registry to the new W&B Registry in the Fall or early Winter of 2024. See Migrating from legacy Model Registry for information about the migration process.
The W&B Model Registry houses a team’s trained models where ML Practitioners can publish candidates for production to be consumed by downstream teams and stakeholders. It is used to house staged/candidate models and manage workflows associated with staging.
Move model versions through the ML lifecycle, from staging to production.
Track a model’s lineage and audit the history of changes to production models.
How it works
Track and manage your staged models with a few simple steps.
Log a model version: In your training script, add a few lines of code to save the model files as an artifact to W&B.
Compare performance: Check live charts to compare the metrics and sample predictions from model training and validation. Identify which model version performed the best.
Link to registry: Bookmark the best model version by linking it to a registered model, either programmatically in Python or interactively in the W&B UI.
The following code snippet demonstrates how to log and link a model to the Model Registry:
import wandb
import random
# Start a new W&B run
run = wandb.init(project="models_quickstart")

# Simulate logging model metrics
run.log({"acc": random.random()})

# Create a simulated model file
with open("my_model.h5", "w") as f:
    f.write("Model: " + str(random.random()))

# Log and link the model to the Model Registry
run.link_model(path="./my_model.h5", registered_model_name="MNIST")
run.finish()
Connect model transitions to CI/CD workflows: transition candidate models through workflow stages and automate downstream actions with webhooks or jobs.
How to get started
Depending on your use case, explore the following resources to get started with W&B Models:
Use the W&B Model Registry to manage and version your models, track lineage, and promote models through different lifecycle stages
Automate your model management workflows using webhooks.
See how the Model Registry integrates with external ML systems and tools in your model development lifecycle for model evaluation, monitoring, and deployment.
2.3.9.1 - Tutorial: Use W&B for model management
Learn how to use W&B for Model Management
The following walkthrough shows you how to log a model to W&B. By the end of the walkthrough you will:
Create and train a model with the MNIST dataset and the Keras framework.
Log the model that you trained to a W&B project
Mark the dataset used as a dependency to the model you created
Link the model to the W&B Registry.
Evaluate the performance of the model you link to the registry
Mark a model version ready for production.
Copy the code snippets in the order presented in this guide.
Code that is not unique to the Model Registry is hidden in collapsible cells.
Setting up
Before you get started, import the Python dependencies required for this walkthrough:
import wandb
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from wandb.integration.keras import WandbMetricsLogger
from sklearn.model_selection import train_test_split
Provide your W&B entity to the entity variable:
entity = "<entity>"
Create a dataset artifact
First, create a dataset. The following code snippet creates a function that downloads the MNIST dataset:
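A minimal sketch of such a function is shown below, loading MNIST with Keras and normalizing the images; the function name generate_raw_data and the split sizes are illustrative assumptions:
import numpy as np
from tensorflow import keras

def generate_raw_data(train_size=6000):
    eval_size = int(train_size / 6)
    (x_train, y_train), (x_eval, y_eval) = keras.datasets.mnist.load_data()

    # Scale images to the [0, 1] range and add a channel dimension
    x_train = np.expand_dims(x_train.astype("float32") / 255, -1)
    x_eval = np.expand_dims(x_eval.astype("float32") / 255, -1)

    return (x_train[:train_size], y_train[:train_size]), (
        x_eval[:eval_size],
        y_eval[:eval_size],
    )

# Create the dataset used in the rest of this walkthrough
(x_train, y_train), (x_eval, y_eval) = generate_raw_data()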
Next, upload the dataset to W&B. To do this, create an artifact object and add the dataset to that artifact.
project = "model-registry-dev"
model_use_case_id = "mnist"
job_type = "build_dataset"

# Initialize a W&B run
run = wandb.init(entity=entity, project=project, job_type=job_type)

# Create W&B Table for training data
train_table = wandb.Table(data=[], columns=[])
train_table.add_column("x_train", x_train)
train_table.add_column("y_train", y_train)
train_table.add_computed_columns(lambda ndx, row: {"img": wandb.Image(row["x_train"])})
# Create W&B Table for eval data
eval_table = wandb.Table(data=[], columns=[])
eval_table.add_column("x_eval", x_eval)
eval_table.add_column("y_eval", y_eval)
eval_table.add_computed_columns(lambda ndx, row: {"img": wandb.Image(row["x_eval"])})
# Create an artifact object
artifact_name = "{}_dataset".format(model_use_case_id)
artifact = wandb.Artifact(name=artifact_name, type="dataset")
# Add wandb.WBValue obj to the artifact.
artifact.add(train_table, "train_table")
artifact.add(eval_table, "eval_table")
# Persist any changes made to the artifact.
artifact.save()

# Tell W&B this run is finished.
run.finish()
Storing files (such as datasets) in an artifact is useful in the context of logging models because it lets you track a model’s dependencies.
Train a model
Train a model with the artifact dataset you created in the previous step.
Declare dataset artifact as an input to the run
Declare the dataset artifact you created in a previous step as the input to the W&B run. This is particularly useful in the context of logging models because declaring an artifact as an input to a run lets you track the dataset (and the version of the dataset) used to train a specific model. W&B uses the information collected to create a lineage map.
Use the use_artifact API to both declare the dataset artifact as the input of the run and to retrieve the artifact itself.
job_type ="train_model"config = {
"optimizer": "adam",
"batch_size": 128,
"epochs": 5,
"validation_split": 0.1,
}
# Initialize a W&B run
run = wandb.init(project=project, job_type=job_type, config=config)

# Retrieve the dataset artifact
version = "latest"
name = "{}:{}".format("{}_dataset".format(model_use_case_id), version)
artifact = run.use_artifact(artifact_or_name=name)
# Get specific content from the dataframe
train_table = artifact.get("train_table")
x_train = train_table.get_column("x_train", convert_to="numpy")
y_train = train_table.get_column("y_train", convert_to="numpy")
For more information about tracking the inputs and output of a model, see Create model lineage map.
Define and train model
For this walkthrough, define a 2D Convolutional Neural Network (CNN) with Keras to classify images from the MNIST dataset.
Train CNN on MNIST data
# Store values from our config dictionary into variables for easy accessing
num_classes = 10
input_shape = (28, 28, 1)

loss = "categorical_crossentropy"
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
batch_size = run.config["batch_size"]
epochs = run.config["epochs"]
validation_split = run.config["validation_split"]
# Create model architecture
model = keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)
# Generate labels for training data
y_train = keras.utils.to_categorical(y_train, num_classes)

# Create training and test set
x_t, x_v, y_t, y_v = train_test_split(x_train, y_train, test_size=0.33)
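Next, train the model and link it to the registry. A minimal sketch, assuming the registered model name MNIST-dev and the model artifact name mnist_model used elsewhere in this walkthrough:
# Train the model
model.fit(
    x=x_t,
    y=y_t,
    batch_size=batch_size,
    epochs=epochs,
    validation_data=(x_v, y_v),
    callbacks=[WandbMetricsLogger()],  # log training metrics to W&B
)

# Save the model locally, then log it and link it to the registered model
path = "model.h5"
model.save(path)

registered_model_name = "MNIST-dev"
run.link_model(path=path, registered_model_name=registered_model_name, name="mnist_model")

run.finish()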
W&B creates a registered model for you if the name you specify for registered-model-name does not already exist.
See link_model in the API Reference guide for more information on optional parameters.
Evaluate the performance of a model
It is common practice to evaluate the performance of one or more models.
First, get the evaluation dataset artifact stored in W&B in a previous step.
job_type ="evaluate_model"# Initialize a runrun = wandb.init(project=project, entity=entity, job_type=job_type)
model_use_case_id ="mnist"version ="latest"# Get dataset artifact, mark it as a dependencyartifact = run.use_artifact(
"{}:{}".format("{}_dataset".format(model_use_case_id), version)
)
# Get desired dataframe
eval_table = artifact.get("eval_table")
x_eval = eval_table.get_column("x_eval", convert_to="numpy")
y_eval = eval_table.get_column("y_eval", convert_to="numpy")
Download the model version from W&B that you want to evaluate. Use the use_model API to access and download your model.
alias ="latest"# aliasname ="mnist_model"# name of the model artifact# Access and download model. Returns path to downloaded artifactdownloaded_model_path = run.use_model(name=f"{name}:{alias}")
# # Log metrics, images, tables, or any data useful for evaluation.run.log(data={"loss": (loss, _)})
Promote a model version
Mark a model version ready for the next stage of your machine learning workflow with a model alias. Each registered model can have one or more model aliases. A model alias can only belong to a single model version at a time.
For example, suppose that after evaluating a model’s performance, you are confident that the model is ready for production. To promote that model version, add the production alias to that specific model version.
The production alias is one of the most common aliases used to mark a model as production-ready.
You can add an alias to a model version interactively with the W&B App UI or programmatically with the Python SDK. The following steps show how to add an alias with the W&B Model Registry App:
A model version represents a single model checkpoint. Model versions are a snapshot at a point in time of a model and its files within an experiment.
A model version is an immutable directory of data and metadata that describes a trained model. W&B suggests that you add files to your model version that let you store (and restore) your model architecture and learned parameters at a later date.
A model version belongs to one, and only one, model artifact. A model version can belong to zero or more registered models. Model versions are stored in a model artifact in the order they are logged to the model artifact. W&B automatically creates a new model version if it detects that a model you log (to the same model artifact) has different contents than a previous model version.
Store files within model versions that are produced from the serialization process provided by your modeling library (for example, PyTorch and Keras).
Model alias
Model aliases are mutable strings that allow you to uniquely identify or reference a model version in your registered model with a semantically related identifier. You can only assign an alias to one version of a registered model. This is because an alias should refer to a unique version when used programmatically. It also allows aliases to be used to capture a model’s state (champion, candidate, production).
It is common practice to use aliases such as "best", "latest", "production", or "staging" to mark model versions with special purposes.
For example, suppose you create a model and assign it a "best" alias. You can then refer to that specific model version with run.use_model:
import wandb
run = wandb.init()
name =f"{entity/project/model_artifact_name}:{alias}"run.use_model(name=name)
Model tags
Model tags are keywords or labels that belong to one or more registered models.
Use model tags to organize registered models into categories and to search over those categories in the Model Registry’s search bar. Model tags appear at the top of the Registered Model Card. You might choose to use them to group your registered models by ML task, owning team, or priority. The same model tag can be added to multiple registered models to allow for grouping.
Model tags, which are labels applied to registered models for grouping and discoverability, are different from model aliases. Model aliases are unique identifiers or nicknames that you use to fetch a model version programmatically. To learn more about using tags to organize the tasks in your Model Registry, see Organize models.
Model artifact
A model artifact is a collection of logged model versions. Model versions are stored in a model artifact in the order they are logged to the model artifact.
A model artifact can contain one or more model versions. A model artifact can be empty if no model versions are logged to it.
For example, suppose you create a model artifact. During model training, you periodically save your model during checkpoints. Each checkpoint corresponds to its own model version. All of the model versions created during your model training and checkpoint saving are stored in the same model artifact you created at the beginning of your training script.
The following image shows a model artifact that contains three model versions: v0, v1, and v2.
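As a minimal sketch (with placeholder entity, project, and artifact names), logging a checkpoint after each epoch to the same model artifact name produces a new version whenever the checkpoint’s contents change:
import numpy as np
import wandb
from tensorflow import keras

run = wandb.init(entity="<entity>", project="<project>")

# A small placeholder model and some random data to train on
model = keras.Sequential([keras.layers.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(loss="mse", optimizer="adam")
x, y = np.random.rand(32, 4), np.random.rand(32, 1)

for epoch in range(3):
    model.fit(x, y, epochs=1, verbose=0)  # weights change each epoch
    model.save("checkpoint.h5")
    # Logging to the same artifact name creates versions v0, v1, v2, ...
    run.log_model(path="checkpoint.h5", name="<model-artifact-name>")

run.finish()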
A registered model is a collection of pointers (links) to model versions. You can think of a registered model as a folder of “bookmarks” of candidate models for the same ML task. Each “bookmark” of a registered model is a pointer to a model version that belongs to a model artifact. You can use model tags to group your registered models.
Registered models often represent candidate models for a single modeling use case or task. For example, you might create a registered model for each image classification task based on the model you use: ImageClassifier-ResNet50, ImageClassifier-VGG16, DogBreedClassifier-MobileNetV2, and so on. Model versions are assigned version numbers in the order in which they were linked to the registered model.
Track a model, the model’s dependencies, and other information relevant to that model with the W&B Python SDK.
Under the hood, W&B creates a lineage of the model artifact that you can view with the W&B App UI or programmatically with the W&B Python SDK. See Create model lineage map for more information.
How to log a model
Use the run.log_model API to log a model. Provide the path where your model files are saved to the path parameter. The path can be a local file, directory, or reference URI to an external bucket such as s3://bucket/path.
Optionally provide a name for the model artifact for the name parameter. If name is not specified, W&B uses the basename of the input path prepended with the run ID.
Copy and paste the following code snippet. Make sure to replace values enclosed in <> with your own.
import wandb
# Initialize a W&B run
run = wandb.init(project="<project>", entity="<entity>")

# Log the model
run.log_model(path="<path-to-model>", name="<name>")
Example: Log a Keras model to W&B
The following code example shows how to log a convolutional neural network (CNN) model to W&B.
import os
import wandb
from tensorflow import keras
from tensorflow.keras import layers
config = {"optimizer": "adam", "loss": "categorical_crossentropy"}
# Initialize a W&B run
run = wandb.init(entity="charlie", project="mnist-project", config=config)

# Training algorithm
loss = run.config["loss"]
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
num_classes = 10
input_shape = (28, 28, 1)
model = keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)
# Save model
model_filename = "model.h5"
local_filepath = "./"
full_path = os.path.join(local_filepath, model_filename)
model.save(filepath=full_path)

# Log the model
# highlight-next-line
run.log_model(path=full_path, name="MNIST")

# Explicitly tell W&B to end the run.
run.finish()
2.3.9.4 - Create a registered model
Create a registered model to hold all the candidate models for your modeling tasks.
Create a registered model to hold all the candidate models for your modeling tasks. You can create a registered model interactively within the Model Registry or programmatically with the Python SDK.
Programmatically create a registered model
Programmatically register a model with the W&B Python SDK. W&B automatically creates a registered model for you if the registered model doesn’t exist.
Make sure to replace the values enclosed in <> with your own:
import wandb
run = wandb.init(entity="<entity>", project="<project>")
run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")
run.finish()
The name you provide for registered_model_name is the name that appears in the Model Registry App.
For example, suppose you have a nightly job. It is tedious to manually link a model created each night. Instead, you could create a script that evaluates the model, and if the model improves in performance, link that model to the model registry with the W&B Python SDK.
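A minimal sketch of such a nightly script is shown below; the metric name, the quality threshold, and the evaluate_model helper are hypothetical and stand in for your own evaluation logic:
import random
import wandb

def evaluate_model(model_path: str) -> float:
    """Hypothetical evaluation helper; replace with your own metric computation."""
    return random.random()

run = wandb.init(entity="<entity>", project="<project>", job_type="nightly_eval")

accuracy = evaluate_model("<path-to-model>")
run.log({"accuracy": accuracy})

# Only link the nightly candidate if it clears an (illustrative) quality bar
if accuracy > 0.95:
    run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")

run.finish()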
2.3.9.5 - Link a model version
Link a model version to a registered model with the W&B App or programmatically with the Python SDK.
Programmatically link a model
Use the link_model method to programmatically log model files to a W&B run and link them to the W&B Model Registry.
Make sure to replace the values enclosed in <> with your own:
import wandb
run = wandb.init(entity="<entity>", project="<project>")
run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")
run.finish()
W&B creates a registered model for you if the name you specify for the registered-model-name parameter does not already exist.
For example, suppose you have an existing registered model named “Fine-Tuned-Review-Autocompletion” (registered-model-name="Fine-Tuned-Review-Autocompletion") in your Model Registry, and suppose that a few model versions are linked to it: v0, v1, v2. If you programmatically link a new model and use the same registered model name (registered-model-name="Fine-Tuned-Review-Autocompletion"), W&B links this model to the existing registered model and assigns it model version v3. If no registered model with this name exists, W&B creates a new registered model and the linked model receives version v0.
Hover your mouse next to the name of the registered model you want to link a new model to.
Select the meatball menu icon (three horizontal dots) next to View details.
From the dropdown, select Link new version.
From the Project dropdown, select the name of the project that contains your model.
From the Model Artifact dropdown, select the name of the model artifact.
From the Version dropdown, select the model version you want to link to the registered model.
Navigate to your project’s artifact browser on the W&B App at: https://wandb.ai/<entity>/<project>/artifacts
Select the Artifacts icon on the left sidebar.
Click on the model version you want to link to your registry.
Within the Version overview section, click the Link to registry button.
From the modal that appears on the right of the screen, select a registered model from the Select a registered model dropdown.
Click Next step.
(Optional) Select an alias from the Aliases dropdown.
Click Link to registry.
View the source of linked models
There are two ways to view the source of linked models: The artifact browser within the project that the model is logged to and the W&B Model Registry.
A pointer connects a specific model version in the model registry to the source model artifact (located within the project the model is logged to). The source model artifact also has a pointer to the model registry.
Select View details next to the name of your registered model.
Within the Versions section, select View next to the model version you want to investigate.
Click on the Version tab within the right panel.
Within the Version overview section there is a row that contains a Source Version field. The Source Version field shows both the name of the model and the model’s version.
For example, the following image shows a v0 model version called mnist_model (see Source version field mnist_model:v0), linked to a registered model called MNIST-dev.
Navigate to your project’s artifact browser on the W&B App at: https://wandb.ai/<entity>/<project>/artifacts
Select the Artifacts icon on the left sidebar.
Expand the model dropdown menu from the Artifacts panel.
Select the name and version of the model linked to the model registry.
Click on the Version tab within the right panel.
Within the Version overview section there is a row that contains a Linked To field. The Linked To field shows both the name of the registered model and the version it possesses (registered-model-name:version).
For example, in the following image, there is a registered model called MNIST-dev (see the Linked To field). A model version called mnist_model with version v0 (mnist_model:v0) points to the MNIST-dev registered model.
2.3.9.6 - Organize models
Use model tags to organize registered models into categories and to search over those categories.
Select View details next to the name of the registered model you want to add a model tag to.
Scroll to the Model card section.
Click the plus button (+) next to the Tags field.
Type in the name for your tag or search for a pre-existing model tag.
For example, the following image shows multiple model tags added to a registered model called FineTuned-Review-Autocompletion:
2.3.9.7 - Create model lineage map
A useful feature of logging model artifacts to W&B is lineage graphs. Lineage graphs show artifacts logged by a run as well as artifacts used by a specific run.
This means that, when you log a model artifact, you at a minimum have access to view the W&B run that used or produced the model artifact. If you track a dependency, you also see the inputs used by the model artifact.
For example, the following image shows artifacts created and used throughout an ML experiment:
From left to right, the image shows:
The jumping-monkey-1 W&B run created the mnist_dataset:v0 dataset artifact.
The vague-morning-5 W&B run trained a model using the mnist_dataset:v0 dataset artifact. The output of this W&B run was a model artifact called mnist_model:v0.
A run called serene-haze-6 used the model artifact (mnist_model:v0) to evaluate the model.
Track an artifact dependency
Declare a dataset artifact as an input to a W&B run with the use_artifact API to track a dependency.
The following code snippet shows how to use the use_artifact API:
# Initialize a run
run = wandb.init(project=project, entity=entity)

# Get artifact, mark it as a dependency
artifact = run.use_artifact(artifact_or_name="name", aliases="<alias>")
Once you have retrieved your artifact, you can use that artifact to, for example, evaluate the performance of a model.
Example: Train a model and track a dataset as the input of a model
job_type ="train_model"config = {
"optimizer": "adam",
"batch_size": 128,
"epochs": 5,
"validation_split": 0.1,
}
run = wandb.init(project=project, job_type=job_type, config=config)
version ="latest"name ="{}:{}".format("{}_dataset".format(model_use_case_id), version)
# highlight-startartifact = run.use_artifact(name)
# highlight-endtrain_table = artifact.get("train_table")
x_train = train_table.get_column("x_train", convert_to="numpy")
y_train = train_table.get_column("y_train", convert_to="numpy")
# Store values from our config dictionary into variables for easy accessing
num_classes = 10
input_shape = (28, 28, 1)

loss = "categorical_crossentropy"
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
batch_size = run.config["batch_size"]
epochs = run.config["epochs"]
validation_split = run.config["validation_split"]
# Create model architecture
model = keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)
# Generate labels for training data
y_train = keras.utils.to_categorical(y_train, num_classes)

# Create training and test set
x_t, x_v, y_t, y_v = train_test_split(x_train, y_train, test_size=0.33)

# Train the model
model.fit(
x=x_t,
y=y_t,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_v, y_v),
callbacks=[WandbCallback(log_weights=True, log_evaluation=True)],
)
# Save model locally
path = "model.h5"
model.save(path)

path = "./model.h5"
registered_model_name = "MNIST-dev"
name = "mnist_model"

# highlight-start
run.link_model(path=path, registered_model_name=registered_model_name, name=name)
# highlight-end

run.finish()
2.3.9.8 - Document machine learning model
Add descriptions to model card to document your model
Add a description to the model card of your registered model to document aspects of your machine learning model. Some topics worth documenting include:
Summary: A summary of what the model is. The purpose of the model. The machine learning framework the model uses, and so forth.
Training data: Describe the training data used, the processing done on the training data set, where that data is stored, and so forth.
Architecture: Information about the model architecture, layers, and any specific design choices.
Deserialize the model: Provide information on how someone on your team can load the model into memory.
Task: The specific type of task or problem that the machine learning model is designed to perform. It’s a categorization of the model’s intended capability.
License: The legal terms and permissions associated with the use of the machine learning model. It helps model users understand the legal framework under which they can utilize the model.
References: Citations or references to relevant research papers, datasets, or external resources.
Deployment: Details on how and where the model is deployed and guidance on how the model is integrated into other enterprise systems, such as workflow orchestration platforms.
Select View details next to the name of the registered model you want to create a model card for.
Go to the Model card section.
Within the Description field, provide information about your machine learning model. Format text within a model card with Markdown markup language.
For example, the following image shows the model card of a Credit-card Default Prediction registered model.
2.3.9.9 - Download a model version
How to download a model with W&B Python SDK
Use the W&B Python SDK to download a model artifact that you linked to the Model Registry.
You are responsible for providing any additional Python functions or API calls needed to reconstruct and deserialize your model into a form that you can work with.
W&B suggests that you document information on how to load models into memory with model cards. For more information, see the Document machine learning models page.
Replace values within <> with your own:
import wandb
# Initialize a run
run = wandb.init(project="<project>", entity="<entity>")

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name="<your-model-name>")
Reference a model version with one of the following formats:
latest - Use latest alias to specify the model version that is most recently linked.
v# - Use v0, v1, v2, and so on to fetch a specific version in the Registered Model
alias - Specify the custom alias that you and your team assigned to your model version
See use_model in the API Reference guide for more information on possible parameters and return type.
Example: Download and use a logged model
For example, in the following code snippet a user calls the use_model API. They specify the name of the model artifact they want to fetch along with a version or alias, then store the path returned by the API in the downloaded_model_path variable.
import wandb
entity ="luka"project ="NLP_Experiments"alias ="latest"# semantic nickname or identifier for the model versionmodel_artifact_name ="fine-tuned-model"# Initialize a runrun = wandb.init()
# Access and download model. Returns path to downloaded artifactdownloaded_model_path = run.use_model(name=f"{entity/project/model_artifact_name}:{alias}")
Planned deprecation for W&B Model Registry in 2024
The following tabs demonstrate how to consume model artifacts using the soon-to-be-deprecated Model Registry.
Use the W&B Registry to track, organize and consume model artifacts. For more information see the Registry docs.
Replace values within <> with your own:
import wandb
# Initialize a run
run = wandb.init(project="<project>", entity="<entity>")

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name="<your-model-name>")
Reference a model version with one of the following formats:
latest - Use latest alias to specify the model version that is most recently linked.
v# - Use v0, v1, v2, and so on to fetch a specific version in the Registered Model
alias - Specify the custom alias that you and your team assigned to your model version
See use_model in the API Reference guide for more information on possible parameters and return type.
Select the registered model you want to receive notifications from.
Click on the Connect Slack button.
Follow the instructions to enable W&B in your Slack workspace that appear on the OAuth page.
Once you have configured Slack notifications for your team, you can pick and choose registered models to get notifications from.
A toggle that reads New model version linked to… appears instead of a Connect Slack button if you have Slack notifications configured for your team.
The screenshot below shows a FMNIST classifier registered model that has Slack notifications.
A message is automatically posted to the connected Slack channel each time a new model version is linked to the FMNIST classifier registered model.
2.3.9.11 - Manage data governance and access control
Use model registry role based access controls (RBAC) to control who can update protected aliases.
Use protected aliases to represent key stages of your model development pipeline. Only Model Registry administrators can define, add, modify, or remove protected aliases. W&B blocks non-admin users from adding or removing protected aliases from model versions.
Only Team admins or current registry admins can manage the list of registry admins.
For example, suppose you set staging and production as protected aliases. Any member of your team can add new model versions. However, only admins can add a staging or production alias.
Set up access control
The following steps describe how to set up access controls for your team’s model registry.
Select the gear button on the top right of the page.
Scroll down to the Protected Aliases section.
Click the plus (+) icon to add a new alias.
2.4 - Automations
2.4.1 - Model registry automations
Use an Automation for model CI (automated model evaluation pipelines) and model deployment.
Create an automation to trigger workflow steps, such as automated model testing and deployment. To create an automation, define the action you want to occur based on an event type.
For example, you can create a trigger that automatically deploys a model to GitHub when you add a new version of a registered model.
Looking for companion tutorials for automations?
This tutorial shows you how to set up an automation that triggers a GitHub Action for model evaluation and deployment
This video series shows webhook basics and how to set them up in W&B.
This demo details how to set up an automation to deploy a model to a SageMaker endpoint
Event types
An event is a change that takes place in the W&B ecosystem. The Model Registry supports two event types:
Use Linking a new artifact to a registered model to test new model candidates.
Use Adding a new alias to a version of the registered model to specify an alias that represents a special step of your workflow, like deploy, and any time a new model version has that alias applied.
Automate a webhook based on an action with the W&B App UI. To do this, first establish a webhook, then configure the webhook automation.
Your webhook’s endpoint must have a fully qualified domain name. W&B does not support connecting to an endpoint by IP address or by a hostname such as localhost. This restriction helps protect against server-side request forgery (SSRF) attacks and other related threat vectors.
Add a secret for authentication or authorization
Secrets are team-level variables that let you obfuscate private strings such as credentials, API keys, passwords, tokens, and more. W&B recommends you use secrets to store any string that you want to protect the plain text content of.
To use a secret in your webhook, you must first add that secret to your team’s secret manager.
Only W&B Admins can create, edit, or delete a secret.
Skip this section if the external server you send HTTP POST requests to does not use secrets.
Secrets are also available if you use W&B Server in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type.
There are two types of secrets W&B suggests that you create when you use a webhook automation:
Access tokens: Authorize senders to help secure webhook requests
Secret: Ensure the authenticity and integrity of data transmitted from payloads
Follow the instructions below to create a webhook:
Navigate to the W&B App UI.
Click on Team Settings.
Scroll down the page until you find the Team secrets section.
Click on the New secret button.
A modal will appear. Provide a name for your secret in the Secret name field.
Add your secret into the Secret field.
(Optional) Repeat steps 5 and 6 to create another secret (such as an access token) if your webhook requires additional secret keys or tokens to authenticate your webhook.
Specify the secrets you want to use for your webhook automation when you configure the webhook. See the Configure a webhook section for more information.
Once you create a secret, you can access that secret in your W&B workflows with $.
If you use secrets in W&B Server, you are responsible for configuring security measures that satisfy your security needs.
W&B strongly recommends that you store secrets in a W&B instance of a cloud secrets manager provided by AWS, GCP, or Azure. Secret managers provided by AWS, GCP, and Azure are configured with advanced security capabilities.
W&B does not recommend that you use a Kubernetes cluster as the backend of your secrets store. Consider a Kubernetes cluster only if you are not able to use a W&B instance of a cloud secrets manager (AWS, GCP, or Azure), and you understand how to prevent security vulnerabilities that can occur if you use a cluster.
Configure a webhook
Before you can use a webhook, first configure that webhook in the W&B App UI.
Only W&B Admins can configure a webhook for a W&B Team.
Ensure you already created one or more secrets if your webhook requires additional secret keys or tokens to authenticate your webhook.
Navigate to the W&B App UI.
Click on Team Settings.
Scroll down the page until you find the Webhooks section.
Click on the New webhook button.
Provide a name for your webhook in the Name field.
Provide the endpoint URL for the webhook in the URL field.
(Optional) From the Secret dropdown menu, select the secret you want to use to authenticate the webhook payload.
(Optional) From the Access token dropdown menu, select the access token you want to use to authorize the sender.
(Optional) From the Access token dropdown menu select additional secret keys or tokens required to authenticate a webhook (such as an access token).
See the Troubleshoot your webhook section to view where the secret and access token are specified in the POST request.
Add a webhook
Once you have a webhook configured and (optionally) a secret, navigate to the Model Registry App at https://wandb.ai/registry/model.
From the Event type dropdown, select an event type.
(Optional) If you selected A new version is added to a registered model event, provide the name of a registered model from the Registered model dropdown.
Select Webhooks from the Action type dropdown.
Click on the Next step button.
Select a webhook from the Webhook dropdown.
(Optional) Provide a payload in the JSON expression editor. See the Example payload section for common use case examples.
Click on Next step.
Provide a name for your webhook automation in the Automation name field.
(Optional) Provide a description for your webhook.
Click on the Create automation button.
Example payloads
The following tabs demonstrate example payloads based on common use cases. Within the examples they reference the following keys to refer to condition objects in the payload parameters:
${event_type} Refers to the type of event that triggered the action.
${event_author} Refers to the user that triggered the action.
${artifact_version} Refers to the specific artifact version that triggered the action. Passed as an artifact instance.
${artifact_version_string} Refers to the specific artifact version that triggered the action. Passed as a string.
${artifact_collection_name} Refers to the name of the artifact collection that the artifact version is linked to.
${project_name} Refers to the name of the project owning the mutation that triggered the action.
${entity_name} Refers to the name of the entity owning the mutation that triggered the action.
Verify that your access tokens have the required set of permissions to trigger your GHA workflow. For more information, see these GitHub Docs.
Send a repository dispatch from W&B to trigger a GitHub Action. For example, suppose you have a workflow that accepts a repository dispatch as a trigger for the on key:
on:
repository_dispatch:
types: BUILD_AND_DEPLOY
The payload for the repository dispatch might look something like:
The event_type key in the webhook payload must match the types field in the GitHub workflow YAML file.
The contents and positioning of rendered template strings depend on the event or model version the automation is configured for. ${event_type} will render as either LINK_ARTIFACT or ADD_ARTIFACT_ALIAS. See below for an example mapping:
Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. If those tools can call Python scripts, they can consume the registered model artifacts through the W&B API.
Review a W&B report, which illustrates how to use a GitHub Actions webhook automation for Model CI. Check out this GitHub repository to learn how to create model CI with a Modal Labs webhook.
Configure an ‘Incoming Webhook’ to get the webhook URL for your Teams channel. The following is an example payload:
{
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": "New Notification",
"sections": [
{
"activityTitle": "Notification from WANDB",
"text": "This is an example message sent via Teams webhook.",
"facts": [
{
"name": "Author",
"value": "${event_author}" },
{
"name": "Event Type",
"value": "${event_type}" }
],
"markdown": true }
]
}
You can use template strings to inject W&B data into your payload at the time of execution (as shown in the Teams example above).
Set up your Slack app and add an incoming webhook integration using the instructions in the Slack API documentation. Make sure that the secret specified under Bot User OAuth Token is used as your W&B webhook’s access token.
Interactively troubleshoot your webhook with the W&B App UI or programmatically with a Bash script. You can troubleshoot a webhook when you create a new webhook or edit an existing webhook.
Interactively test a webhook with the W&B App UI.
Navigate to your W&B Team Settings page.
Scroll to the Webhooks section.
Click on the three horizontal dots (meatball menu) next to the name of your webhook.
Select Test.
From the UI panel that appears, paste your POST request to the field that appears.
Click on Test webhook.
Within the W&B App UI, W&B posts the response made by your endpoint.
The following bash script generates a POST request similar to the POST request W&B sends to your webhook automation when it is triggered.
Copy and paste the code below into a shell script to troubleshoot your webhook. Specify your own values for the following:
ACCESS_TOKEN
SECRET
PAYLOAD
API_ENDPOINT
#!/bin/bash
# Your access token and secret
ACCESS_TOKEN="your_api_key"
SECRET="your_api_secret"
API_ENDPOINT="your_webhook_endpoint_url"

# The data you want to send (for example, in JSON format)
PAYLOAD='{"key1": "value1", "key2": "value2"}'

# Generate the HMAC signature
# For security, W&B includes the X-Wandb-Signature in the header, computed
# from the payload and the shared secret key associated with the webhook
# using the HMAC with SHA-256 algorithm.
SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -binary | base64)

# Make the cURL request
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "X-Wandb-Signature: $SIGNATURE" \
  -d "$PAYLOAD" "$API_ENDPOINT"
View automation
View automations associated with a registered model from the W&B App UI.
Scroll to the bottom of the page to the Automations section.
Hover your mouse next to the name of the automation and click on the kebab (three vertical dots) menu.
Select Delete.
2.4.2 - Trigger CI/CD events when artifact changes
Use a project-scoped artifact automation in your project to trigger actions when aliases or versions in an artifact collection are created or changed.
Create an automation that triggers when an artifact is changed. Use artifact automations when you want to automate downstream actions for versioning artifacts. To create an automation, define the action you want to occur based on an event type.
Artifact automations are scoped to a project. This means that only events within a project will trigger an artifact automation.
This is in contrast to automations created in the W&B Model Registry. Automations created in the model registry are scoped to the Model Registry. They are triggered when events are performed on model versions linked to the Model Registry. For information on how to create an automation for model versions, see the Automations for Model CI/CD page in the Model Registry chapter.
Event types
An event is a change that takes place in the W&B ecosystem. You can define two different event types for artifact collections in your project: A new version of an artifact is created in a collection and An artifact alias is added.
Use the A new version of an artifact is created in a collection event type for applying recurring actions to each version of an artifact. For example, you can create an automation that automatically starts a training job when a new dataset artifact version is created.
Use the An artifact alias is added event type to create an automation that activates when a specific alias is applied to an artifact version. For example, you could create an automation that triggers an action when someone adds the “test-set-quality-check” alias to an artifact version, which then triggers downstream processing on that dataset.
Create a webhook automation
Automate a webhook based on an action with the W&B App UI. To do this, you will first establish a webhook, then you will configure the webhook automation.
Specify an endpoint for your webhook that has an Address record (A record). W&B does not support connecting to endpoints that are exposed directly with IP addresses such as [0-255].[0-255].[0-255].[0-255] or endpoints exposed as localhost. This restriction helps protect against server-side request forgery (SSRF) attacks and other related threat vectors.
Add a secret for authentication or authorization
Secrets are team-level variables that let you obfuscate private strings such as credentials, API keys, passwords, tokens, and more. W&B recommends you use secrets to store any string that you want to protect the plain text content of.
To use a secret in your webhook, you must first add that secret to your team’s secret manager.
Only W&B Admins can create, edit, or delete a secret.
Skip this section if the external server you send HTTP POST requests to does not use secrets.
Secrets are also available if you use W&B Server in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type.
There are two types of secrets W&B suggests that you create when you use a webhook automation:
Access tokens: Authorize senders to help secure webhook requests
Secret: Ensure the authenticity and integrity of data transmitted from payloads
Follow the instructions below to create a webhook:
Navigate to the W&B App UI.
Click on Team Settings.
Scroll down the page until you find the Team secrets section.
Click on the New secret button.
A modal will appear. Provide a name for your secret in the Secret name field.
Add your secret into the Secret field.
(Optional) Repeat steps 5 and 6 to create another secret (such as an access token) if your webhook requires additional secret keys or tokens to authenticate your webhook.
Specify the secrets you want to use for your webhook automation when you configure the webhook. See the Configure a webhook section for more information.
Once you create a secret, you can access that secret in your W&B workflows with $.
Configure a webhook
Before you can use a webhook, you will first need to configure that webhook in the W&B App UI.
Only W&B Admins can configure a webhook for a W&B Team.
Ensure you already created one or more secrets if your webhook requires additional secret keys or tokens to authenticate your webhook.
Navigate to the W&B App UI.
Click on Team Settings.
Scroll down the page until you find the Webhooks section.
Click on the New webhook button.
Provide a name for your webhook in the Name field.
Provide the endpoint URL for the webhook in the URL field.
(Optional) From the Secret dropdown menu, select the secret you want to use to authenticate the webhook payload.
(Optional) From the Access token dropdown menu, select the access token you want to use to authorize the sender.
(Optional) From the Access token dropdown menu select additional secret keys or tokens required to authenticate a webhook (such as an access token).
See the Troubleshoot your webhook section to view where the secret and access token are specified in the POST request.
Add a webhook
Once you have a webhook configured and (optionally) a secret, navigate to your project workspace. Click on the Automations tab on the left sidebar.
From the Event type dropdown, select an event type.
If you selected A new version of an artifact is created in a collection event, provide the name of the artifact collection that the automation should respond to from the Artifact collection dropdown.
Select Webhooks from the Action type dropdown.
Click on the Next step button.
Select a webhook from the Webhook dropdown.
(Optional) Provide a payload in the JSON expression editor. See the Example payload section for common use case examples.
Click on Next step.
Provide a name for your webhook automation in the Automation name field.
(Optional) Provide a description for your webhook.
Click on the Create automation button.
Example payloads
The following tabs demonstrate example payloads based on common use cases. Within the examples they reference the following keys to refer to condition objects in the payload parameters:
${event_type} Refers to the type of event that triggered the action.
${event_author} Refers to the user that triggered the action.
${artifact_version} Refers to the specific artifact version that triggered the action. Passed as an artifact instance.
${artifact_version_string} Refers to the specific artifact version that triggered the action. Passed as a string.
${artifact_collection_name} Refers to the name of the artifact collection that the artifact version is linked to.
${project_name} Refers to the name of the project owning the mutation that triggered the action.
${entity_name} Refers to the name of the entity owning the mutation that triggered the action.
Verify that your access tokens have the required set of permissions to trigger your GHA workflow. For more information, see these GitHub Docs.
Send a repository dispatch from W&B to trigger a GitHub Action. For example, suppose you have a workflow that accepts a repository dispatch as a trigger for the on key:
on:
repository_dispatch:
types: BUILD_AND_DEPLOY
The payload for the repository dispatch might look something like:
The event_type key in the webhook payload must match the types field in the GitHub workflow YAML file.
The contents and positioning of rendered template strings depend on the event or model version the automation is configured for. ${event_type} will render as either LINK_ARTIFACT or ADD_ARTIFACT_ALIAS. See below for an example mapping:
Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. If those tools can call Python scripts, they can consume W&B artifacts through the W&B API.
Configure an ‘Incoming Webhook’ to get the webhook URL for your Teams channel. The following is an example payload:
{
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": "New Notification",
"sections": [
{
"activityTitle": "Notification from WANDB",
"text": "This is an example message sent via Teams webhook.",
"facts": [
{
"name": "Author",
"value": "${event_author}" },
{
"name": "Event Type",
"value": "${event_type}" }
],
"markdown": true }
]
}
You can use template strings to inject W&B data into your payload at the time of execution (as shown in the Teams example above).
Set up your Slack app and add an incoming webhook integration using the instructions in the Slack API documentation. Make sure that the secret specified under Bot User OAuth Token is used as your W&B webhook’s access token.
Interactively troubleshoot your webhook with the W&B App UI or programmatically with a Bash script. You can troubleshoot a webhook when you create a new webhook or edit an existing webhook.
Interactively test a webhook with the W&B App UI.
Navigate to your W&B Team Settings page.
Scroll to the Webhooks section.
Click the three horizontal dots (meatball menu) next to the name of your webhook.
Select Test.
In the UI panel that appears, paste your POST request into the provided field.
Click on Test webhook.
Within the W&B App UI, W&B posts the response made by your endpoint.
The following bash script generates a POST request similar to the POST request W&B sends to your webhook automation when it is triggered.
Copy and paste the code below into a shell script to troubleshoot your webhook. Specify your own values for the following:
ACCESS_TOKEN
SECRET
PAYLOAD
API_ENDPOINT
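If you prefer Python to a shell script, a rough equivalent is sketched below. The headers are assumptions rather than a guarantee of what W&B sends: the sketch passes the access token as a bearer token and an HMAC-SHA256 signature of the payload (derived from SECRET) in an X-Wandb-Signature header. Verify these against what your endpoint actually expects.
import hashlib
import hmac
import json
import urllib.request

# Specify your own values for the following (placeholders shown)
ACCESS_TOKEN = "your-access-token"
SECRET = "your-webhook-secret"
PAYLOAD = {"event_type": "LINK_ARTIFACT", "event_author": "test-user"}
API_ENDPOINT = "https://example.com/webhook"  # hypothetical endpoint

body = json.dumps(PAYLOAD).encode("utf-8")
# Assumption: the signature is an HMAC-SHA256 hex digest of the request body keyed by SECRET
signature = hmac.new(SECRET.encode("utf-8"), body, hashlib.sha256).hexdigest()

request = urllib.request.Request(
    API_ENDPOINT,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "X-Wandb-Signature": signature,  # assumption; check your endpoint's expectations
    },
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))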
View an automation
View automations associated to an artifact from the W&B App UI.
Navigate to your project workspace on the W&B App.
Click on the Automations tab on the left sidebar.
Within the Automations section you can find the following properties for each automation created in your project:
Trigger type: The type of trigger that was configured.
Action type: The action type that triggers the automation.
Action name: The action name you provided when you created the automation.
Queue: The name of the queue the job was enqueued to. This field is left empty if you selected a webhook action type.
Delete an automation
Delete an automation associated with an artifact. Actions in progress are not affected if you delete the automation before the action completes.
Navigate to your project workspace on the W&B App.
Click on the Automations tab on the left sidebar.
From the list, select the name of the automation you want to delete.
Hover your mouse next to the name of the automation and click the kebab (three vertical dots) menu.
Select Delete.
2.5 - W&B App UI Reference
2.5.1 - Panels
Use workspace panel visualizations to explore your logged data by key, visualize the relationships between hyperparameters and output metrics, and more.
Workspace modes
W&B projects support two different workspace modes. The icon next to the workspace name shows its mode.
Automated workspaces automatically generate panels for all keys logged in the project. This can help you get started by visualizing all available data for the project.
Manual workspaces start as blank slates and display only those panels intentionally added by users. Choose a manual workspace when you care mainly about a fraction of the keys logged in the project, or for a more focused analysis.
To undo changes to your workspace, click the Undo button (arrow that points left) or type CMD + Z (macOS) or CTRL + Z (Windows / Linux).
Reset a workspace
To reset a workspace:
At the top of the workspace, click the action menu ....
Click Reset workspace.
Add panels
You can add panels to your workspace, either globally or at the section level.
To add a panel:
To add a panel globally, click Add panels in the control bar near the panel search field.
To add a panel directly to a section instead, click the section’s action ... menu, then click + Add panels.
Select the type of panel to add.
Quick add
Quick Add allows you to select a key in the project from a list to generate a standard panel for it.
Quick add is not available in an automated workspace with no deleted panels, because every logged key already has a panel. You can use Quick add to re-add a panel that you deleted.
Custom panel add
To add a custom panel to your workspace:
Select the type of panel you’d like to create.
Follow the prompts to configure the panel.
To learn more about the options for each type of panel, refer to the relevant section below, such as Line plots or Bar plots.
Manage panels
Edit a panel
To edit a panel:
Click its pencil icon.
Modify the panel’s settings.
To change the panel to a different type, select the type and then configure the settings.
Click Apply.
Move a panel
To move a panel to a different section, you can use the drag handle on the panel. To select the new section from a list instead:
If necessary, create a new section by clicking Add section after the last section.
Click the action ... menu for the panel.
Click Move, then select a new section.
You can also use the drag handle to rearrange panels within a section.
Duplicate a panel
To duplicate a panel:
At the top of the panel, click the action ... menu.
Click Duplicate.
If desired, you can customize or move the duplicated panel.
Remove panels
To remove a panel:
Hover your mouse over the panel.
Select the action ... menu.
Click Delete.
To remove all panels from a manual workspace, click its action ... menu, then click Clear all panels.
To remove all panels from an automatic or manual workspace, you can reset the workspace. Select Automatic to start with the default set of panels, or select Manual to start with an empty workspace with no panels.
Share a full-screen panel directly
Direct colleagues to a specific panel in your project. The link redirects users to a full screen view of that panel when they click that link. To create a link to a panel:
Hover your mouse over the panel.
Select the action ... menu.
Click Copy panel URL.
The settings of the project determine who can view the panel. This means that if the project is private, only members of the project can view the panel. If the project is public, anyone with the link can view the panel.
If multiple panels have the same name, W&B shares the first panel with that name.
Manage sections
By default, sections in a workspace reflect the logging hierarchy of your keys. However, in a manual workspace, sections appear only after you start adding panels.
Add a section
To add a section, click Add section after the last section.
To add a new section before or after an existing section, you can instead click the section’s action ... menu, then click New section below or New section above.
Rename a section
To rename a section, click its action ... menu, then click Rename section.
Delete a section
To delete a section, click its ... menu, then click Delete section. This removes the section and its panels.
2.5.1.1 - Line plots
Visualize metrics, customize axes, and compare multiple lines on a plot
Line plots show up by default when you plot metrics over time with wandb.log(). Customize with chart settings to compare multiple lines on the same plot, calculate custom axes, and rename labels.
Edit line panel settings
Hover your mouse over the panel whose settings you want to edit. Select the pencil icon that appears. Within the modal that appears, select a tab to edit the Data, Grouping, Chart, Expressions, or Legend settings.
Data
Select the Data tab to edit the x-axis, y-axis, smoothing filter, point aggregation, and more. The following describes some of the options you can edit:
If you’d like to use a custom x-axis, make sure it’s logged in the same call to wandb.log() that you use to log the y-axis.
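For example, to make an epoch counter selectable as a custom x-axis, log it in the same call as the metric (a minimal sketch; the project and key names are illustrative):
import wandb

wandb.init(project="custom-x-axis-demo")  # hypothetical project name
for epoch in range(10):
    # Log the custom x-axis value ("epoch") together with the y-axis metric
    wandb.log({"epoch": epoch, "val_accuracy": 0.5 + 0.05 * epoch})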
X axis: By default the x-axis is set to Step. You can change the x-axis to Relative Time, or select a custom axis based on values you log with W&B.
Relative Time (Wall) is clock time since the process started, so if you started a run, resumed it a day later, and logged something, that point would be plotted at 24 hours.
Relative Time (Process) is time inside the running process, so if you started a run, ran it for 10 seconds, and resumed it a day later, that point would be plotted at 10 seconds.
Wall Time is minutes elapsed since the start of the first run on the graph.
Step increments by default each time wandb.log() is called, and is supposed to reflect the number of training steps you’ve logged from your model.
Y axes: Select y-axes from the logged values, including metrics and hyperparameters that change over time.
Min, max, and log scale: Minimum, maximum, and log scale settings for x axis and y axis in line plots
Smoothing: Change the smoothing on the line plot.
Outliers: Rescale to exclude outliers from the default plot min and max scale
Max runs to show: Show more lines on the line plot at once by increasing this number, which defaults to 10 runs. You’ll see the message “Showing first 10 runs” on the top of the chart if there are more than 10 runs available but the chart is constraining the number visible.
Chart type: Change between a line plot, an area plot, and a percentage area plot
Pick multiple y-axes in the line plot settings to compare different metrics on the same chart, like accuracy and validation accuracy for example.
Grouping
Select the Grouping tab to use group by methods to organize your data.
Group by: Select a column, and all the runs with the same value in that column will be grouped together.
Agg: Aggregation— the value of the line on the graph. The options are mean, median, min, and max of the group.
Chart
Select the Chart tab to edit the plot’s title, axis titles, legend, and more.
Title: Add a custom title for line plot, which shows up at the top of the chart
X-Axis title: Add a custom title for the x-axis of the line plot, which shows up in the lower right corner of the chart.
Y-Axis title: Add a custom title for the y-axis of the line plot, which shows up in the upper left corner of the chart.
Show legend: Toggle legend on or off
Font size: Change the font size of the chart title, x-axis title, and y-axis title
Legend position: Change the position of the legend on the chart
Legend
Legend: Select field that you want to see in the legend of the plot for each line. You could, for example, show the name of the run and the learning rate.
Legend template: Fully customizable, this powerful template allows you to specify exactly what text and variables you want to show up in the template at the top of the line plot as well as the legend that appears when you hover your mouse over the plot.
Expressions
Y Axis Expressions: Add calculated metrics to your graph. You can use any of the logged metrics as well as configuration values like hyperparameters to calculate custom lines.
X Axis Expressions: Rescale the x-axis to use calculated values using custom expressions. Useful variables include _step for the default x-axis, and the syntax for referencing summary values is ${summary:value}.
Visualize average values on a plot
If you have several different experiments and you’d like to see the average of their values on a plot, you can use the Grouping feature in the table. Click “Group” above the run table and select “All” to show averaged values in your graphs.
Here is what the graph looks like before averaging:
The following image shows a graph that represents average values across runs using grouped lines.
Visualize NaN value on a plot
You can also plot NaN values including PyTorch tensors on a line plot with wandb.log. For example:
wandb.log({"test": [..., float("nan"), ...]})
Compare two metrics on one chart
Select the Add panels button in the top right corner of the page.
From the left panel that appears, expand the Evaluation dropdown.
Select Run comparer
Change the color of the line plots
Sometimes the default colors of runs are not helpful for comparison. To help overcome this, W&B provides two places where you can manually change the colors.
Each run is assigned a random color by default upon initialization.
Upon clicking any of the colors, a color palette appears from which you can manually choose the color you want.
Hover your mouse over the panel whose settings you want to edit.
Select the pencil icon that appears.
Choose the Legend tab.
Visualize on different x axes
If you’d like to see the absolute time that an experiment has taken, or see what day an experiment ran, you can switch the x axis. Here’s an example of switching from steps to relative time and then to wall time.
Area plots
In the line plot settings, in the advanced tab, click on different plot styles to get an area plot or a percentage area plot.
Zoom
Click and drag a rectangle to zoom vertically and horizontally at the same time. This changes the x-axis and y-axis zoom.
Hide chart legend
Turn off the legend in the line plot with this simple toggle:
2.5.1.1.1 - Line plot reference
X-Axis
You can set the x-axis of a line plot to any value that you have logged with wandb.log, as long as it’s always logged as a number.
Y-Axis variables
You can set the y-axis variables to any value you have logged with wandb.log as long as you were logging numbers, arrays of numbers or a histogram of numbers. If you logged more than 1500 points for a variable, W&B samples down to 1500 points.
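For example, both keys logged below can be selected as y-axis variables (a minimal sketch; the project name and values are illustrative):
import random

import wandb

wandb.init(project="line-plot-demo")  # hypothetical project name
for step in range(100):
    wandb.log(
        {
            "loss": 1.0 / (step + 1),  # a single number
            "gradients": wandb.Histogram([random.gauss(0, 1) for _ in range(500)]),  # a histogram of numbers
        }
    )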
You can change the color of your y axis lines by changing the color of the run in the runs table.
X range and Y range
You can change the maximum and minimum values of X and Y for the plot.
X range default is from the smallest value of your x-axis to the largest.
Y range default is from the smallest value of your metrics and zero to the largest value of your metrics.
Max runs/groups
By default you will only plot 10 runs or groups of runs. The runs will be taken from the top of your runs table or run set, so if you sort your runs table or run set you can change the runs that are shown.
Legend
You can control the legend of your chart to show, for any run, any config value that you logged as well as metadata from the runs, such as the creation time or the user who created the run.
Example:
${run:displayName} - ${config:dropout} will make the legend name for each run something like royal-sweep - 0.5 where royal-sweep is the run name and 0.5 is the config parameter named dropout.
You can set values inside [[ ]] to display point-specific values in the crosshair when hovering over a chart. For example, [[ $x: $y ($original) ]] would display something like “2: 3 (2.9)”.
Supported values inside [[ ]] are as follows:
${x}: X value
${y}: Y value (including smoothing adjustment)
${original}: Y value not including smoothing adjustment
${mean}: Mean of grouped runs
${stddev}: Standard deviation of grouped runs
${min}: Min of grouped runs
${max}: Max of grouped runs
${percent}: Percent of total (for stacked area charts)
Grouping
You can aggregate all of the runs by turning on grouping, or group over an individual variable. You can also turn on grouping by grouping inside the table and the groups will automatically populate into the graph.
Smoothing
You can set the smoothing coefficient to be between 0 and 1 where 0 is no smoothing and 1 is maximum smoothing.
Ignore outliers
Ignore outliers sets the graph’s y-axis range to the 5th to 95th percentile of the data, rather than scaling to make all data visible.
Expression
Expression lets you plot values derived from metrics like 1-accuracy. It currently only works if you are plotting a single metric. You can do simple arithmetic expressions, +, -, *, / and % as well as ** for powers.
Plot style
Select a style for your line plot.
Line plot:
Area plot:
Percentage area plot:
2.5.1.1.2 - Point aggregation
Use point aggregation methods within your line plots for improved data visualization accuracy and performance. There are two types of point aggregation modes: full fidelity and random sampling. W&B uses full fidelity mode by default.
Full fidelity
When you use full fidelity mode, W&B breaks the x-axis into dynamic buckets based on the number of data points. It then calculates the minimum, maximum, and average values within each bucket while rendering a point aggregation for the line plot.
There are three main advantages to using full fidelity mode for point aggregation:
Preserve extreme values and spikes: retain extreme values and spikes in your data
Configure how minimum and maximum points render: use the W&B App to interactively decide whether you want to show extreme (min/max) values as a shaded area.
Explore your data without losing data fidelity: W&B recalculates x-axis bucket sizes when you zoom into specific data points. This helps ensure that you can explore your data without losing accuracy. Caching is used to store previously computed aggregations to help reduce loading times, which is particularly useful if you are navigating through large datasets.
Configure how minimum and maximum points render
Show or hide minimum and maximum values with shaded areas around your line plots.
The preceding image shows a blue line plot. The light blue shaded area represents the minimum and maximum values for each bucket.
There are three ways to render minimum and maximum values in your line plots:
Never: The min/max values are not displayed as a shaded area. Only show the aggregated line across the x-axis bucket.
On hover: The shaded area for min/max values appears dynamically when you hover over the chart. This option keeps the view uncluttered while allowing you to inspect ranges interactively.
Always: The min/max shaded area is consistently displayed for every bucket in the chart, helping you visualize the full range of values at all times. This can introduce visual noise if there are many runs visualized in the chart.
By default, the minimum and maximum values are not displayed as shaded areas. To view one of the shaded area options, follow these steps:
Navigate to your W&B project
Select the Workspace icon in the left sidebar.
Select the gear icon in the top right corner of the screen, to the left of the Add panels button.
From the UI slider that appears, select Line plots.
Within the Point aggregation section, choose On hover or Always from the Show min/max values as a shaded area dropdown menu.
Navigate to your W&B project
Select the Workspace icon in the left sidebar.
Select the line plot panel you want to enable full fidelity mode for
Within the modal that appears, select On hover or Always from the Show min/max values as a shaded area dropdown menu.
Explore your data without losing data fidelity
Analyze specific regions of the dataset without missing critical points like extreme values or spikes. When you zoom in on a line plot, W&B adjusts the bucket sizes used to calculate the minimum, maximum, and average values within each bucket.
W&B dynamically divides the x-axis into 1,000 buckets by default. For each bucket, W&B calculates the following values:
Minimum: The lowest value in that bucket.
Maximum: The highest value in that bucket.
Average: The mean value of all points in that bucket.
W&B plots values in buckets in a way that preserves full data representation and includes extreme values in every plot. When zoomed in to 1,000 points or fewer, full fidelity mode renders every data point without additional aggregation.
To zoom in on a line plot, follow these steps:
Navigate to your W&B project
Select the Workspace icon in the left sidebar.
Optionally add a line plot panel to your workspace or navigate to an existing line plot panel.
Click and drag to select a specific region to zoom in on.
Line plot grouping and expressions
When you use Line Plot Grouping, W&B applies the following based on the mode selected:
Non-windowed sampling (grouping): Aligns points across runs on the x-axis. The average is taken if multiple points share the same x-value; otherwise, they appear as discrete points.
Windowed sampling (grouping and expressions): Divides the x-axis either into 250 buckets or the number of points in the longest line (whichever is smaller). W&B takes an average of points within each bucket.
Full fidelity (grouping and expressions): Similar to non-windowed sampling, but fetches up to 500 points per run to balance performance and detail.
Random sampling
Random sampling uses 1500 randomly sampled points to render line plots. Random sampling is useful for performance reasons when you have a large number of data points.
Random sampling samples non-deterministically. This means that random sampling sometimes excludes important outliers or spikes in the data and therefore reduces data accuracy.
Enable random sampling
By default, W&B uses full fidelity mode. To enable random sampling, follow these steps:
Navigate to your W&B project
Select the Workspace icon in the left sidebar.
Select the gear icon in the top right corner of the screen, to the left of the Add panels button.
From the UI slider that appears, select Line plots
Choose Random sampling from the Point aggregation section
Navigate to your W&B project
Select the Workspace icon in the left sidebar.
Select the line plot panel you want to enable random sampling for
Within the modal that appears, select Random sampling from the Point aggregation method section
Access non sampled data
You can access the complete history of metrics logged during a run using the W&B Run API. The following example demonstrates how to retrieve and process the loss values from a specific run:
import wandb

# Initialize the W&B API
api = wandb.Api()
run = api.run("l2k2/examples-numpy-boston/i0wt6xua")

# Retrieve the history of the 'Loss' metric
history = run.scan_history(keys=["Loss"])

# Extract the loss values from the history
losses = [row["Loss"] for row in history]
2.5.1.1.3 - Smooth line plots
In line plots, use smoothing to see trends in noisy data.
Exponential smoothing is a technique for smoothing time series data by exponentially decaying the weight of previous points. The range is 0 to 1. See Exponential Smoothing for background. There is a de-bias term added so that early values in the time series are not biased towards zero.
The EMA algorithm takes the density of points on the line (the number of y values per unit of range on x-axis) into account. This allows consistent smoothing when displaying multiple lines with different characteristics simultaneously.
Here is sample code for how this works under the hood:
const smoothingWeight = Math.min(Math.sqrt(smoothingParam || 0), 0.999);
let lastY = yValues.length > 0 ? 0 : NaN;
let debiasWeight = 0;

return yValues.map((yPoint, index) => {
  const prevX = index > 0 ? index - 1 : 0;
  // VIEWPORT_SCALE scales the result to the chart's x-axis range
  const changeInX = ((xValues[index] - xValues[prevX]) / rangeOfX) * VIEWPORT_SCALE;
  const smoothingWeightAdj = Math.pow(smoothingWeight, changeInX);
  lastY = lastY * smoothingWeightAdj + yPoint;
  debiasWeight = debiasWeight * smoothingWeightAdj + 1;
  return lastY / debiasWeight;
});
Gaussian smoothing (or gaussian kernel smoothing) computes a weighted average of the points, where the weights correspond to a gaussian distribution with the standard deviation specified as the smoothing parameter. The smoothed value is calculated for every input x value.
Gaussian smoothing is a good standard choice for smoothing if you are not concerned with matching TensorBoard’s behavior. Unlike an exponential moving average the point will be smoothed based on points occurring both before and after the value.
Running average is a smoothing algorithm that replaces a point with the average of points in a window before and after the given x value. See “Boxcar Filter” at https://en.wikipedia.org/wiki/Moving_average. The selected parameter for running average tells Weights and Biases the number of points to consider in the moving average.
Consider using Gaussian Smoothing if your points are spaced unevenly on the x-axis.
The following image demonstrates what a running average looks like in the app:
Exponential Moving Average (Deprecated)
The TensorBoard EMA algorithm has been deprecated as it cannot accurately smooth multiple lines on the same chart that do not have a consistent point density (number of points plotted per unit of x-axis).
Exponential moving average is implemented to match TensorBoard’s smoothing algorithm. The range is 0 to 1. See Exponential Smoothing for background. There is a debias term added so that early values in the time series are not biased towards zero.
The following is a minimal Python sketch of the general approach, assuming evenly spaced points (the limitation that led to this algorithm’s deprecation):
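def debiased_ema(y_values, smoothing_weight):
    # Exponential moving average with a debias term so that early values
    # are not pulled toward zero; assumes evenly spaced points.
    smoothed = []
    last = 0.0
    num_accumulated = 0
    for y in y_values:
        last = last * smoothing_weight + (1 - smoothing_weight) * y
        num_accumulated += 1
        debias_weight = 1 - smoothing_weight**num_accumulated
        smoothed.append(last / debias_weight)
    return smoothed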
All of the smoothing algorithms run on the sampled data, meaning that if you log more than 1500 points, the smoothing algorithm will run after the points are downloaded from the server. The intention of the smoothing algorithms is to help find patterns in data quickly. If you need exact smoothed values on metrics with a large number of logged points, it may be better to download your metrics through the API and run your own smoothing methods.
Hide original data
By default we show the original, unsmoothed data as a faint line in the background. Click the Show Original toggle to turn this off.
2.5.1.2 - Bar plots
Visualize metrics, customize axes, and compare categorical data as bars.
A bar plot presents categorical data with rectangular bars which can be plotted vertically or horizontally. Bar plots show up by default with wandb.log() when all logged values are of length one.
Customize with chart settings to limit max runs to show, group runs by any config and rename labels.
Customize bar plots
You can also create Box or Violin Plots to combine many summary statistics into one chart type.
Group runs via runs table.
Click ‘Add panel’ in the workspace.
Add a standard ‘Bar Chart’ and select the metric to plot.
Under the ‘Grouping’ tab, pick ‘box plot’ or ‘Violin’, etc. to plot either of these styles.
2.5.1.3 - Parallel coordinates
Compare results across machine learning experiments
Parallel coordinates charts summarize the relationship between large numbers of hyperparameters and model metrics at a glance.
Lines: Each line represents a single run. Mouse over a line to see a tooltip with details about the run. All lines that match the current filters will be shown, but if you turn off the eye, lines will be grayed out.
Panel Settings
Configure these features in the panel settings— click the edit button in the upper right corner of the panel.
Tooltip: On hover, a legend shows up with info on each run
Titles: Edit the axis titles to be more readable
Gradient: Customize the gradient to be any color range you like
Log scale: Each axis can be set to view on a log scale independently
Flip axis: Switch the axis direction— this is useful when you have both accuracy and loss as columns
2.5.1.4 - Scatter plots
Use the scatter plot to compare multiple runs and visualize how your experiments are performing. We’ve added some customizable features:
Plot a line along the min, max, and average
Custom metadata tooltips
Control point colors
Set axes ranges
Switch axes to log scale
Here’s an example of validation accuracy of different models over a couple of weeks of experimentation. The tooltip is customized to include the batch size and dropout as well as the values on the axes. There’s also a line plotting the running average of validation accuracy. See a live example →
2.5.1.5 - Save and diff code
By default, W&B only saves the latest git commit hash. You can turn on more code features to compare the code between your experiments dynamically in the UI.
Starting with wandb version 0.8.28, W&B can save the code from your main training file where you call wandb.init().
Save library code
When you enable code saving, W&B saves the code from the file that called wandb.init(). To save additional library code, you have three options:
Call wandb.run.log_code(".") after calling wandb.init()
import wandb
wandb.init()
wandb.run.log_code(".")
Pass a settings object to wandb.init with code_dir set
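For example:
import wandb

# Save source code from the current directory and its subdirectories
wandb.init(settings=wandb.Settings(code_dir="."))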
This captures all python source code files in the current directory and all subdirectories as an artifact. For more control over the types and locations of source code files that are saved, see the reference docs.
Set code saving in the UI
In addition to setting code saving programmatically, you can also toggle this feature in your W&B account Settings. Note that this will enable code saving for all teams associated with your account.
By default, W&B disables code saving for all teams.
Log in to your W&B account.
Go to Settings > Privacy.
Under Project and content security, toggle Disable default code saving on.
Code comparer
Compare code used in different W&B runs:
Select the Add panels button in the top right corner of the page.
Expand TEXT AND CODE dropdown and select Code.
Jupyter session history
W&B saves the history of code executed in your Jupyter notebook session. When you call wandb.init() inside of Jupyter, W&B adds a hook to automatically save a Jupyter notebook containing the history of code executed in your current session.
Navigate to the project workspace that contains your code.
Select the Artifacts tab in the left navigation bar.
Expand the code artifact.
Select the Files tab.
This displays the cells that were run in your session along with any outputs created by calling iPython’s display method. This enables you to see exactly what code was run within Jupyter in a given run. When possible W&B also saves the most recent version of the notebook which you would find in the code directory as well.
2.5.1.6 - Parameter importance
Visualize the relationships between your model’s hyperparameters and output metrics
Discover which of your hyperparameters were the best predictors of, and highly correlated to desirable values of your metrics.
Correlation is the linear correlation between the hyperparameter and the chosen metric (in this case val_loss). So a high correlation means that when the hyperparameter has a higher value, the metric also has higher values and vice versa. Correlation is a great metric to look at but it can’t capture second order interactions between inputs and it can get messy to compare inputs with wildly different ranges.
Therefore W&B also calculates an importance metric. W&B trains a random forest with the hyperparameters as inputs and the metric as the target output, and reports the feature importance values from the random forest.
The idea for this technique was inspired by a conversation with Jeremy Howard who has pioneered the use of random forest feature importances to explore hyperparameter spaces at Fast.ai. W&B highly recommends you check out this lecture (and these notes) to learn more about the motivation behind this analysis.
The hyperparameter importance panel untangles the complicated interactions between highly correlated hyperparameters. In doing so, it helps you fine-tune your hyperparameter searches by showing you which of your hyperparameters matter the most in terms of predicting model performance.
Creating a hyperparameter importance panel
Navigate to your W&B project.
Select the Add panels button.
Expand the CHARTS dropdown and choose Parallel coordinates from the dropdown.
If an empty panel appears, make sure that your runs are ungrouped.
With the parameter manager, we can manually set the visible and hidden parameters.
Interpreting a hyperparameter importance panel
This panel shows you all the parameters passed to the wandb.config object in your training script. Next, it shows the feature importances and correlations of these config parameters with respect to the model metric you select (val_loss in this case).
Importance
The importance column shows you the degree to which each hyperparameter was useful in predicting the chosen metric. Imagine a scenario where you start tuning a plethora of hyperparameters and use this plot to hone in on which ones merit further exploration. The subsequent sweeps can then be limited to the most important hyperparameters, thereby finding a better model faster and cheaper.
W&B calculates importances using a tree-based model rather than a linear model, as the former is more tolerant of both categorical data and data that’s not normalized.
In the preceding image, you can see that epochs, learning_rate, batch_size and weight_decay were fairly important.
Correlations
Correlations capture linear relationships between individual hyperparameters and metric values. They answer the question of whether there is a significant relationship between using a hyperparameter, such as the SGD optimizer, and the val_loss (the answer in this case is yes). Correlation values range from -1 to 1, where positive values represent positive linear correlation, negative values represent negative linear correlation, and a value of 0 represents no correlation. Generally a value greater than 0.7 in either direction represents strong correlation.
You might use this graph to further explore the values that have a higher correlation to your metric (in this case you might pick stochastic gradient descent or adam over rmsprop or nadam) or train for more epochs.
Correlations show evidence of association, not necessarily causation.
Correlations are sensitive to outliers, which might turn a strong relationship into a moderate one, especially if the sample size of hyperparameters tried is small.
Finally, correlations only capture linear relationships between hyperparameters and metrics. If there is a strong polynomial relationship, it won’t be captured by correlations.
The disparities between importance and correlations result from the fact that importance accounts for interactions between hyperparameters, whereas correlation only measures the effects of individual hyperparameters on metric values. Secondly, correlations capture only linear relationships, whereas importances can capture more complex ones.
As you can see both importance and correlations are powerful tools for understanding how your hyperparameters influence model performance.
2.5.1.7 - Compare run metrics
Compare metrics across multiple runs
Use the Run Comparer to see what metrics are different across your runs.
Select the Add panels button in the top right corner of the page.
From the left panel that appears, expand the Evaluation dropdown.
Select Run comparer
Toggle the diff only option to hide rows where the values are the same across runs.
2.5.1.8 - Query panels
Some features on this page are in beta, hidden behind a feature flag. Add weave-plot to your bio on your profile page to unlock all related features.
Looking for W&B Weave, W&B’s suite of tools for building Generative AI applications? Find the docs for Weave here: wandb.me/weave.
Use query panels to query and interactively visualize your data.
Create a query panel
Add a query to your workspace or within a report.
Navigate to your project’s workspace.
In the upper right hand corner, click Add panel.
From the dropdown, select Query panel.
To add a query panel to a report instead, navigate to the report, then type and select /Query panel.
Alternatively, you can associate a query with a set of runs:
Within your report, type and select /Panel grid.
Click the Add panel button.
From the dropdown, select Query panel.
Query components
Expressions
Use query expressions to query your data stored in W&B such as runs, artifacts, models, tables, and more.
Example: Query a table
Suppose you want to query a W&B Table. In your training code you log a table called "cifar10_sample_table":
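For example, the logging step might look like the following (a minimal sketch; the project name and table contents are illustrative):
import wandb

wandb.init(project="query-panel-demo")  # hypothetical project name

# Log a small W&B Table under the key "cifar10_sample_table"
columns = ["id", "label"]
data = [[0, "cat"], [1, "dog"], [2, "ship"]]
wandb.log({"cifar10_sample_table": wandb.Table(columns=columns, data=data)})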
Within the query panel you can query your table with:
runs.summary["cifar10_sample_table"]
Breaking this down:
runs is a variable automatically injected in Query Panel Expressions when the Query Panel is in a Workspace. Its “value” is the list of runs which are visible for that particular Workspace. Read about the different attributes available within a run here.
summary is an op which returns the Summary object for a Run. Ops are mapped, meaning this op is applied to each Run in the list, resulting in a list of Summary objects.
["cifar10_sample_table"] is a Pick op (denoted with brackets), with a parameter of predictions. Since Summary objects act like dictionaries or maps, this operation picks the predictions field off of each Summary object.
To learn how to write your own queries interactively, see this report.
Configurations
Select the gear icon on the upper left corner of the panel to expand the query configuration. This allows the user to configure the type of panel and the parameters for the result panel.
Result panels
Finally, the query result panel renders the result of the query expression, using the selected query panel, configured by the configuration to display the data in an interactive form. The following images show a Table and a Plot of the same data.
Basic operations
The following are common operations you can perform within your query panels.
Sort
Sort from the column options:
Filter
You can either filter directly in the query or use the filter button in the top left corner.
Map
Map operations iterate over lists and apply a function to each element in the data. You can do this directly with a panel query or by inserting a new column from the column options.
Groupby
You can groupby using a query or from the column options.
Concat
The concat operation allows you to concatenate two tables; you can then concatenate or join them from the panel settings.
Join
It is also possible to join tables directly in the query. Consider the following query expression:
(row) => row["Label"] are selectors for each table, determining which column to join on
"Table1" and "Table2" are the names of each table when joined
true and false are for left and right inner/outer join settings
Runs object
Use query panels to access the runs object. Run objects store records of your experiments. You can find more details in this section of the report but, as a quick overview, the runs object has the following available:
summary: A dictionary of information that summarizes the run’s results. This can be scalars like accuracy and loss, or large files. By default, wandb.log() sets the summary to the final value of a logged time series. You can set the contents of the summary directly. Think of the summary as the run’s outputs.
history: A list of dictionaries meant to store values that change while the model is training such as loss. The command wandb.log() appends to this object.
config: A dictionary of the run’s configuration information, such as the hyperparameters for a training run or the preprocessing methods for a run that creates a dataset Artifact. Think of these as the run’s “inputs”
Access Artifacts
Artifacts are a core concept in W&B. They are a versioned, named collection of files and directories. Use Artifacts to track model weights, datasets, and any other file or directory. Artifacts are stored in W&B and can be downloaded or used in other runs. You can find more details and examples in this section of the report. Artifacts are normally accessed from the project object:
project.artifactVersion(): returns the specific artifact version for a given name and version within a project
project.artifact(""): returns the artifact for a given name within a project. You can then use .versions to get a list of all versions of this artifact
project.artifactType(): returns the artifactType for a given name within a project. You can then use .artifacts to get a list of all artifacts with this type
project.artifactTypes: returns a list of all artifact types under the project
2.5.1.8.1 - Embed objects
W&B’s Embedding Projector allows users to plot multi-dimensional embeddings on a 2D plane using common dimension reduction algorithms like PCA, UMAP, and t-SNE.
Embeddings are used to represent objects (people, images, posts, words, etc…) with a list of numbers - sometimes referred to as a vector. In machine learning and data science use cases, embeddings can be generated using a variety of approaches across a range of applications. This page assumes the reader is familiar with embeddings and is interested in visually analyzing them inside of W&B.
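A minimal logging step might look like the following (a sketch; the project name, keys, and the toy five-dimensional vectors are illustrative):
import wandb

wandb.init(project="embedding-demo")  # hypothetical project name

# Each row holds an identifier plus five numeric embedding dimensions
columns = ["item", "dim_1", "dim_2", "dim_3", "dim_4", "dim_5"]
data = [
    ["apple", 0.1, 0.9, 0.4, 0.2, 0.7],
    ["banana", 0.8, 0.2, 0.5, 0.9, 0.1],
    ["carrot", 0.3, 0.4, 0.9, 0.1, 0.6],
]
wandb.log({"embeddings": wandb.Table(columns=columns, data=data)})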
After running the above code, the W&B dashboard will have a new Table containing your data. You can select 2D Projection from the upper right panel selector to plot the embeddings in 2 dimensions. Smart defaults are selected automatically, and you can easily override them in the configuration menu accessed by clicking the gear icon. In this example, we automatically use all 5 available numeric dimensions.
Digits MNIST
While the above example shows the basic mechanics of logging embeddings, typically you are working with many more dimensions and samples. Let’s consider the MNIST Digits dataset (UCI ML hand-written digits datasets) made available via SciKit-Learn. This dataset has 1797 records, each with 64 dimensions. The problem is a 10 class classification use case. We can convert the input data to an image for visualization as well.
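A sketch of that logging step, assuming scikit-learn is installed (the project name and keys are illustrative):
import wandb
from sklearn.datasets import load_digits

wandb.init(project="embedding-demo")  # hypothetical project name

digits = load_digits()  # 1797 records, 64 dimensions, 10 classes

# One column per embedding dimension, plus the class label and a rendered image
columns = ["target", "image"] + [f"dim_{i}" for i in range(64)]
data = [
    [int(target), wandb.Image(image / 16.0)] + list(flat)  # scale pixel values to [0, 1] for display
    for image, flat, target in zip(digits.images, digits.data, digits.target)
]
wandb.log({"digits": wandb.Table(columns=columns, data=data)})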
After running the above code, again we are presented with a Table in the UI. By selecting 2D Projection we can configure the definition of the embedding, coloring, algorithm (PCA, UMAP, t-SNE), algorithm parameters, and even overlay (in this case we show the image when hovering over a point). In this particular case, these are all “smart defaults” and you should see something very similar with a single click on 2D Projection. (Click here to interact with this example).
Logging Options
You can log embeddings in a number of different formats:
Single Embedding Column: Often your data is already in a “matrix”-like format. In this case, you can create a single embedding column - where the data type of the cell values can be list[int], list[float], or np.ndarray.
Multiple Numeric Columns: In the above two examples, we use this approach and create a column for each dimension. We currently accept python int or float for the cells.
Furthermore, just like all tables, you have many options regarding how to construct the table:
Directly from a dataframe using wandb.Table(dataframe=df)
Directly from a list of data using wandb.Table(data=[...], columns=[...])
Build the table incrementally row-by-row (great if you have a loop in your code). Add rows to your table using table.add_data(...)
Add an embedding column to your table (great if you have a list of predictions in the form of embeddings): table.add_col("col_name", ...)
Add a computed column (great if you have a function or model you want to map over your table): table.add_computed_columns(lambda row, ndx: {"embedding": model.predict(row)})
Plotting Options
After selecting 2D Projection, you can click the gear icon to edit the rendering settings. In addition to selecting the intended columns (see above), you can select an algorithm of interest (along with the desired parameters). Below you can see the parameters for UMAP and t-SNE respectively.
Note: we currently downsample to a random subset of 1000 rows and 50 dimensions for all three algorithms.
2.5.2 - Custom charts
Use Custom Charts to create charts that aren’t possible right now in the default UI. Log arbitrary tables of data and visualize them exactly how you want. Control details of fonts, colors, and tooltips with the power of Vega.
Log data: From your script, log config and summary data as you normally would when running with W&B. To visualize a list of multiple values logged at one specific time, use a custom wandb.Table.
Customize the chart: Pull in any of this logged data with a GraphQL query. Visualize the results of your query with Vega, a powerful visualization grammar.
Log the chart: Call your own preset from your script with wandb.plot_table().
Log charts from a script
Builtin presets
These presets have builtin wandb.plot methods that make it fast to log charts directly from your script and see the exact visualizations you’re looking for in the UI.
wandb.plot.line()
Log a custom line plot—a list of connected and ordered points (x,y) on arbitrary axes x and y.
data = [[x, y] for (x, y) in zip(x_values, y_values)]
table = wandb.Table(data=data, columns=["x", "y"])
wandb.log(
{
"my_custom_plot_id": wandb.plot.line(
table, "x", "y", title="Custom Y vs X Line Plot" )
}
)
You can use this to log curves on any two dimensions. Note that if you’re plotting two lists of values against each other, the number of values in the lists must match exactly (for example, each point must have an x and a y).
Log a custom scatter plot—a list of points (x, y) on a pair of arbitrary axes x and y.
data = [[x, y] for (x, y) in zip(class_x_prediction_scores, class_y_prediction_scores)]
table = wandb.Table(data=data, columns=["class_x", "class_y"])
wandb.log({"my_custom_id": wandb.plot.scatter(table, "class_x", "class_y")})
You can use this to log scatter points on any two dimensions. Note that if you’re plotting two lists of values against each other, the number of values in the lists must match exactly (for example, each point must have an x and a y).
Log a custom bar chart—a list of labeled values as bars—natively in a few lines:
data = [[label, val] for (label, val) in zip(labels, values)]
table = wandb.Table(data=data, columns=["label", "value"])
wandb.log(
{
"my_bar_chart_id": wandb.plot.bar(
table, "label", "value", title="Custom Bar Chart" )
}
)
You can use this to log arbitrary bar charts. Note that the number of labels and values in the lists must match exactly (for example, each data point must have both).
Log a custom histogram—sort list of values into bins by count/frequency of occurrence—natively in a few lines. Let’s say I have a list of prediction confidence scores (scores) and want to visualize their distribution:
data = [[s] for s in scores]
table = wandb.Table(data=data, columns=["scores"])
wandb.log({"my_histogram": wandb.plot.histogram(table, "scores", title=None)})
You can use this to log arbitrary histograms. Note that data is a list of lists, intended to support a 2D array of rows and columns.
Tweak a builtin preset, or create a new preset, then save the chart. Use the chart ID to log data to that custom preset directly from your script.
# Create a table with the columns to plot
table = wandb.Table(data=data, columns=["step", "height"])

# Map from the table's columns to the chart's fields
fields = {"x": "step", "value": "height"}

# Use the table to populate the new custom chart preset
# To use your own saved chart preset, change the vega_spec_name
my_custom_chart = wandb.plot_table(
    vega_spec_name="carey/new_chart",
    data_table=table,
    fields=fields,
)
Here are the data types you can log from your script and use in a custom chart:
Config: Initial settings of your experiment (your independent variables). This includes any named fields you’ve logged as keys to wandb.config at the start of your training. For example: wandb.config.learning_rate = 0.0001
Summary: Single values logged during training (your results or dependent variables). For example, wandb.log({"val_acc" : 0.8}). If you write to this key multiple times during training via wandb.log(), the summary is set to the final value of that key.
History: The full time series of the logged scalar is available to the query via the history field
summaryTable: If you need to log a list of multiple values, use a wandb.Table() to save that data, then query it in your custom panel.
historyTable: If you need to see the history data, then query historyTable in your custom chart panel. Each time you call wandb.Table() or log a custom chart, you’re creating a new table in history for that step.
How to log a custom table
Use wandb.Table() to log your data as a 2D array. Typically each row of this table represents one data point, and each column denotes the relevant fields/dimensions for each data point which you’d like to plot. As you configure a custom panel, the whole table will be accessible via the named key passed to wandb.log() (custom_data_table below), and the individual fields will be accessible via the column names (x, y, and z). You can log tables at multiple time steps throughout your experiment. The maximum size of each table is 10,000 rows.
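For example (a minimal sketch; the project name, the custom_data_table key, and the values are illustrative):
import wandb

wandb.init(project="custom-charts-demo")  # hypothetical project name

# Each row is one data point; the columns are the fields you want to plot
my_custom_data = [[1, 0.5, 10], [2, 0.7, 12], [3, 0.9, 11]]
wandb.log(
    {"custom_data_table": wandb.Table(data=my_custom_data, columns=["x", "y", "z"])}
)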
Add a new custom chart to get started, then edit the query to select data from your visible runs. The query uses GraphQL to fetch data from the config, summary, and history fields in your runs.
Custom visualizations
Select a Chart in the upper right corner to start with a default preset. Next, pick Chart fields to map the data you’re pulling in from the query to the corresponding fields in your chart. Here’s an example of selecting a metric to get from the query, then mapping that into the bar chart fields below.
How to edit Vega
Click Edit at the top of the panel to go into Vega edit mode. Here you can define a Vega specification that creates an interactive chart in the UI. You can change any aspect of the chart. For example, you can change the title, pick a different color scheme, or show curves as a series of points instead of as connected lines. You can also make changes to the data itself, such as using a Vega transform to bin an array of values into a histogram. The panel preview will update interactively, so you can see the effect of your changes as you edit the Vega spec or query. Refer to the Vega documentation and tutorials.
Field references
To pull data into your chart from W&B, add template strings of the form "${field:<field-name>}" anywhere in your Vega spec. This will create a dropdown in the Chart Fields area on the right side, which users can use to select a query result column to map into Vega.
To set a default value for a field, use this syntax: "${field:<field-name>:<placeholder text>}"
Saving chart presets
Apply any changes to a specific visualization panel with the button at the bottom of the modal. Alternatively, you can save the Vega spec to use elsewhere in your project. To save the reusable chart definition, click Save as at the top of the Vega editor and give your preset a name.
Sampling: Dynamically adjust the total number of points loaded into the panel for efficiency
Gotchas
Not seeing the data you’re expecting in the query as you’re editing your chart? It might be because the column you’re looking for is not logged in the runs you have selected. Save your chart and go back out to the runs table, and select the runs you’d like to visualize with the eye icon.
Common use cases
Customize bar plots with error bars
Show model validation metrics which require custom x-y coordinates (like precision-recall curves)
Overlay data distributions from two different models/experiments as histograms
Show changes in a metric via snapshots at multiple points during training
Create a unique visualization not yet available in W&B (and hopefully share it with the world)
2.5.2.1 - Tutorial: Use custom charts
Tutorial of using the custom charts feature in the W&B UI
Use custom charts to control the data you’re loading in to a panel and its visualization.
1. Log data to W&B
First, log data in your script. Use wandb.config for single points set at the beginning of training, like hyperparameters. Use wandb.log() for multiple points over time, and log custom 2D arrays with wandb.Table(). We recommend logging up to 10,000 data points per logged key.
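For example (a minimal sketch; the project name, metric values, and the my_custom_table key are illustrative):
import wandb

wandb.init(project="custom-charts-tutorial", config={"learning_rate": 0.01})  # hypothetical names

# Single points over time with wandb.log()
for step in range(10):
    wandb.log({"loss": 1.0 / (step + 1)})

# A custom 2D array logged with wandb.Table() (here, a toy precision-recall curve)
pr_data = [[0.1 * i, 1.0 - 0.1 * i] for i in range(11)]
wandb.log({"my_custom_table": wandb.Table(data=pr_data, columns=["recall", "precision"])})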
Try a quick example notebook to log the data tables, and in the next step we’ll set up custom charts. See what the resulting charts look like in the live report.
2. Create a query
Once you’ve logged data to visualize, go to your project page and click the + button to add a new panel, then select Custom Chart. You can follow along in this workspace.
Add a query
Click summary and select historyTable to set up a new query pulling data from the run history.
Type in the key where you logged the wandb.Table(). In the code snippet above, it was my_custom_table. In the example notebook, the keys are pr_curve and roc_curve.
Set Vega fields
Now that the query is loading in these columns, they’re available as options to select in the Vega fields dropdown menus:
x-axis: runSets_historyTable_r (recall)
y-axis: runSets_historyTable_p (precision)
color: runSets_historyTable_c (class label)
3. Customize the chart
Now that looks pretty good, but I’d like to switch from a scatter plot to a line plot. Click Edit to change the Vega spec for this built in chart. Follow along in this workspace.
I updated the Vega spec to customize the visualization:
add titles for the plot, legend, x-axis, and y-axis (set “title” for each field)
change the value of “mark” from “point” to “line”
remove the unused “size” field
To save this as a preset that you can use elsewhere in this project, click Save as at the top of the page. Here’s what the result looks like, along with an ROC curve:
Bonus: Composite Histograms
Histograms can visualize numerical distributions to help us understand larger datasets. Composite histograms show multiple distributions across the same bins, letting us compare two or more metrics across different models or across different classes within our model. For a semantic segmentation model detecting objects in driving scenes, we might compare the effectiveness of optimizing for accuracy versus intersection over union (IOU), or we might want to know how well different models detect cars (large, common regions in the data) versus traffic signs (much smaller, less common regions). In the demo Colab, you can compare the confidence scores for two of the ten classes of living things.
To create your own version of the custom composite histogram panel:
Create a new Custom Chart panel in your Workspace or Report (by adding a “Custom Chart” visualization). Hit the “Edit” button in the top right to modify the Vega spec starting from any built-in panel type.
Replace that built-in Vega spec with my MVP code for a composite histogram in Vega. You can modify the main title, axis titles, input domain, and any other details directly in this Vega spec using Vega syntax (you could change the colors or even add a third histogram :)
Modify the query in the right hand side to load the correct data from your wandb logs. Add the field summaryTable and set the corresponding tableKey to class_scores to fetch the wandb.Table logged by your run. This will let you populate the two histogram bin sets (red_bins and blue_bins) via the dropdown menus with the columns of the wandb.Table logged as class_scores. For my example, I chose the animal class prediction scores for the red bins and plant for the blue bins.
You can keep making changes to the Vega spec and query until you’re happy with the plot you see in the preview rendering. Once you’re done, click Save as in the top and give your custom plot a name so you can reuse it. Then click Apply from panel library to finish your plot.
Here’s what my results look like from a very brief experiment: training on only 1000 examples for one epoch yields a model that’s very confident that most images are not plants and very uncertain about which images might be animals.
2.5.3 - Manage workspace, section, and panel settings
Within a given workspace page there are three different setting levels: workspaces, sections, and panels. Workspace settings apply to the entire workspace. Section settings apply to all panels within a section. Panel settings apply to individual panels.
Workspace settings
Workspace settings apply to all sections and all panels within those sections. You can edit two types of workspace settings: Workspace layout and Line plots. Workspace layouts determine the structure of the workspace, while Line plots settings control the default settings for line plots in the workspace.
To edit settings that apply to the overall structure of this workspace:
Navigate to your project workspace.
Click the gear icon next to the New report button to view the workspace settings.
Choose Workspace layout to change the workspace’s layout, or choose Line plots to configure default settings for line plots in the workspace.
Workspace layout options
Configure a workspace’s layout to define the overall structure of the workspace. This includes sectioning logic and panel organization.
The workspace layout options page shows whether the workspace generates panels automatically or manually. To adjust a workspace’s panel generation mode, refer to Panels.
The following describes each workspace layout option:
Hide empty sections during search: Hide sections that do not contain any panels when searching for a panel.
Sort panels alphabetically: Sort panels in your workspaces alphabetically.
Section organization: Remove all existing sections and panels and repopulate them with new section names. Groups the newly populated sections either by first or last prefix.
W&B suggests that you organize sections by grouping the first prefix rather than grouping by the last prefix. Grouping by the first prefix can result in fewer sections and better performance.
Line plots options
Set global defaults and custom rules for line plots in a workspace by modifying the Line plots workspace settings.
You can edit two main settings within Line plots settings: Data and Display preferences. The Data tab contains the following settings:
X axis: The scale of the x-axis in line plots. The x-axis is set to Step by default. See the list of x-axis options below.
Range: Minimum and maximum settings to display for the x-axis.
Smoothing: Change the smoothing on the line plot. For more information about smoothing, see Smooth line plots.
Outliers: Rescale to exclude outliers from the default plot min and max scale.
Point aggregation method: Improve data visualization accuracy and performance. See Point aggregation for more information.
Max number of runs or groups: Limit the number of runs or groups displayed on the line plot.
In addition to Step, there are other options for the x-axis:
Relative Time (Wall): Timestamp since the process starts. For example, suppose you start a run and resume that run the next day. If you then log something, the recorded point is 24 hours.
Relative Time (Process): Timestamp inside the running process. For example, suppose you start a run and let it continue for 10 seconds. The next day you resume that run. The point is recorded as 10 seconds.
Wall Time: Minutes elapsed since the start of the first run on the graph.
Step: Increments each time you call wandb.log().
For information on how to edit an individual line plot, see Edit line panel settings in Line plots.
Within the Display preferences tab, you can toggle the following settings:
Remove legends from all panels: Remove the panel’s legend.
Display colored run names in tooltips: Show the runs as colored text within the tooltip.
Only show highlighted run in companion chart tooltip: Display only highlighted runs in the chart tooltip.
Number of runs shown in tooltips: Display the number of runs in the tooltip.
Display full run names on the primary chart tooltip: Display the full name of the run in the chart tooltip.
Section settings
Section settings apply to all panels within that section. Within a workspace section you can sort panels, rearrange panels, and rename the section.
Modify section settings by selecting the three horizontal dots (…) in the upper right corner of a section.
From the dropdown, you can edit the following settings that apply to the entire section:
Rename a section: Rename the section.
Sort panels A-Z: Sort panels within a section alphabetically.
Rearrange panels: Select and drag a panel within a section to manually order your panels.
The following animation demonstrates how to rearrange panels within a section:
In addition to the preceding settings, you can also edit how sections appear in your workspace with options such as Add section below, Add section above, Delete section, and Add section to report.
Panel settings
Customize an individual panel’s settings to compare multiple lines on the same plot, calculate custom axes, rename labels, and more. To edit a panel’s settings:
Hover your mouse over the panel you want to edit.
Select the pencil icon that appears.
Within the modal that appears, you can edit settings related to the panel’s data, display preferences, and more.
Use the Weights and Biases Settings Page to customize your individual user profile or team settings.
Within your individual user account you can edit: your profile picture, display name, geographic location, biography, emails associated with your account, and alerts for runs. You can also use the settings page to link your GitHub repository and delete your account. For more information, see User settings.
Use the team settings page to invite new members to or remove members from a team, manage alerts for team runs, change privacy settings, and view and manage storage usage. For more information about team settings, see Team settings.
2.5.4.1 - Manage user settings
Manage your profile information, account defaults, alerts, participation in beta products, GitHub integration, storage usage, account activation, and create teams in your user settings.
Navigate to your user profile page and select your user icon on the top right corner. From the dropdown, choose Settings.
Profile
Within the Profile section you can manage and modify your account name and institution. You can optionally add a biography, location, link to a personal or your institution’s website, and upload a profile image.
Teams
Create a new team in the Team section. To create a new team, select the New team button and provide the following:
Team name - the name of your team. The team name must be unique. Team names cannot be changed.
Team type - Select either the Work or Academic button.
Company/Organization - Provide the name of the team’s company or organization. Use the dropdown menu to select a company or organization, or optionally provide a new organization.
Only administrative accounts can create a team.
Beta features
Within the Beta Features section you can optionally enable fun add-ons and sneak previews of new products in development. Select the toggle switch next to the beta feature you want to enable.
Alerts
Get notified when your runs crash or finish, or set custom alerts with wandb.alert(); see the sketch after the following list. Receive notifications either through email or Slack. Toggle the switch next to the event type you want to receive alerts from.
Runs finished: whether a Weights and Biases run successfully finished.
Run crashed: notification if a run has failed to finish.
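For the custom-alert case, a minimal sketch might look like the following; the project name, metric, and threshold here are hypothetical:
import wandb

run = wandb.init(project="my-awesome-project")  # hypothetical project name

accuracy = 0.21  # stand-in for a metric computed during training
if accuracy < 0.3:  # hypothetical threshold
    # Send an alert that appears in the W&B App and, per your settings, in email or Slack
    run.alert(
        title="Low accuracy",
        text=f"Accuracy {accuracy:.2f} is below the expected threshold",
    )
run.finish()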
Personal GitHub integration
Connect a personal GitHub account. To connect a GitHub account:
Select the Connect Github button. This will redirect you to an open authorization (OAuth) page.
Select the organization to grant access in the Organization access section.
Select Authorize wandb.
Delete your account
Select the Delete Account button to delete your account.
Account deletion cannot be reversed.
Storage
The Storage section describes the total storage your account has consumed on the Weights and Biases servers. The default storage plan is 100GB. For more information about storage and pricing, see the Pricing page.
2.5.4.2 - Manage team settings
Manage a team’s members, avatar, alerts, and privacy settings with the Team Settings page.
Team settings
Change your team’s settings, including members, avatar, alerts, privacy, and usage. Only team administrators can view and edit a team’s settings.
Only administrator account types can change team settings or remove a member from a team.
Members
The Members section shows a list of all pending invitations and the members who have accepted the invitation to join the team. Each member listed displays the member’s name, username, email, team role, and their access privileges to Models and Weave, which are inherited from the organization. There are three standard team roles: Administrator (Admin), Member, and View-only.
See Add and Manage teams for information on how to create a team, invite users to a team, remove users from a team, and change a user’s role.
Avatar
Set an avatar by navigating to the Avatar section and uploading an image.
Select Update Avatar to prompt a file dialog to appear.
From the file dialog, choose the image you want to use.
Alerts
Notify your team when runs crash, finish, or set custom alerts. Your team can receive alerts either through email or Slack.
Toggle the switch next to the event type you want to receive alerts from. Weights and Biases provides the following event type options by default:
Runs finished: whether a Weights and Biases run successfully finished.
Run crashed: notification if a run has failed to finish.
Privacy
Navigate to the Privacy section to change privacy settings. Only members with administrator roles can modify privacy settings. Administrators can:
Force projects in the team to be private.
Enable code saving by default.
Usage
The Usage section describes the total storage the team has consumed on the Weights and Biases servers. The default storage plan is 100GB. For more information about storage and pricing, see the Pricing page.
Storage
The Storage section describes the cloud storage bucket configuration that is being used for the team’s data. For more information, see Secure Storage Connector or check out our W&B Server docs if you are self-hosting.
2.5.4.3 - Manage email settings
Manage emails from the Settings page.
Add, delete, manage email types and primary email addresses in your W&B Profile Settings page. Select your profile icon in the upper right corner of the W&B dashboard. From the dropdown, select Settings. Within the Settings page, scroll down to the Emails dashboard:
Manage primary email
The primary email is marked with a 😎 emoji. The primary email is automatically defined with the email you provided when you created a W&B account.
Select the kebab dropdown to change the primary email associated with your Weights and Biases account:
Only verified emails can be set as primary
Add emails
Select + Add Email to add an email. This will take you to an Auth0 page. You can enter in the credentials for the new email or connect using single sign-on (SSO).
Delete emails
Select the kebab dropdown and choose Delete Emails to delete an email that is registered to your W&B account
Primary emails cannot be deleted. You need to set a different email as a primary email before deleting.
Log in methods
The Log in Methods column displays the login methods associated with your account.
A verification email is sent to your email account when you create a W&B account. Your email account is considered unverified until you verify your email address. Unverified emails are displayed in red.
If you no longer have the original verification email, log in with your email address again to receive a new verification email.
2.5.4.4 - Manage teams
Note: Only the admin of an organization can create a new team.
Create a team profile
You can customize your team’s profile page to show an introduction and showcase reports and projects that are visible to the public or team members. Present reports, projects, and external links.
Highlight your best research to visitors by showcasing your best public reports
Showcase the most active projects to make it easier for teammates to find them
Find collaborators by adding external links to your company or research lab’s website and any papers you’ve published
Remove team members
Team admins can open the team settings page and click the delete button next to the departing member’s name. Any runs logged to the team remain after a user leaves.
Manage team roles and permissions
Select a team role when you invite colleagues to join a team. The following team role options are available:
Admin: Team admins can add and remove other admins or team members. They have permissions to modify all projects and full deletion permissions. This includes, but is not limited to, deleting runs, projects, artifacts, and sweeps.
Member: A regular member of the team. An admin invites a team member by email. A team member cannot invite other members. Team members can only delete runs and sweep runs created by that member. Suppose you have two members, A and B. Member B moves a run from team B’s project to a different project owned by Member A. Member A cannot delete the run that Member B moved to Member A’s project. Only the member that created the run, or the team admin, can delete it.
View-Only (Enterprise-only feature): View-Only members can view assets within the team such as runs, reports, and workspaces. They can follow and comment on reports, but they cannot create, edit, or delete project overviews, reports, or runs.
Custom roles (Enterprise-only feature): Custom roles allow organization admins to compose new roles based on either the View-Only or Member role, together with additional permissions to achieve fine-grained access control. Team admins can then assign any of those custom roles to users in their respective teams. Refer to Introducing Custom Roles for W&B Teams for details.
W&B recommends having more than one admin in a team so that admin operations can continue when the primary admin is not available.
Team settings
Team settings allow you to manage the settings for your team and its members. With these privileges, you can effectively oversee and organize your team within W&B.
The following table lists which roles can perform team management actions (View-Only, Team Member, Team Admin):
Add team members: Team Admin only
Remove team members: Team Admin only
Manage team settings: Team Admin only
Model Registry
The following table lists permissions that apply to all projects across a given team, and which roles hold them (View-Only, Team Member, Model Registry Admin, Team Admin):
Add aliases: Team Member, Model Registry Admin, Team Admin
Add models to the registry: Team Member, Model Registry Admin, Team Admin
View models in the registry: View-Only, Team Member, Model Registry Admin, Team Admin
Download models: View-Only, Team Member, Model Registry Admin, Team Admin
Add/Remove Registry Admins: Model Registry Admin, Team Admin
Add/Remove Protected Aliases: Model Registry Admin
See the Model Registry chapter for more information about protected aliases.
Reports
Report permissions grant access to create, view, and edit reports. The following table lists permissions that apply to all reports across a given team, and which roles hold them (View-Only, Team Member, Team Admin):
View reports: View-Only, Team Member, Team Admin
Create reports: Team Member, Team Admin
Edit reports: Team Member (team members can only edit their own reports), Team Admin
Delete reports: Team Member (team members can only delete their own reports), Team Admin
Experiments
The following table lists permissions that apply to all experiments across a given team, and which roles hold them (View-Only, Team Member, Team Admin):
View experiment metadata (includes history metrics, system metrics, files, and logs): View-Only, Team Member, Team Admin
Edit experiment panels and workspaces: Team Member, Team Admin
Log experiments: Team Member, Team Admin
Delete experiments: Team Member (team members can only delete experiments they created), Team Admin
Stop experiments: Team Member (team members can only stop experiments they created), Team Admin
Artifacts
The following table lists permissions that apply to all artifacts across a given team, and which roles hold them (View-Only, Team Member, Team Admin):
View artifacts: View-Only, Team Member, Team Admin
Create artifacts: Team Member, Team Admin
Delete artifacts: Team Member, Team Admin
Edit metadata: Team Member, Team Admin
Edit aliases: Team Member, Team Admin
Delete aliases: Team Member, Team Admin
Download artifacts: Team Member, Team Admin
System settings (W&B Server only)
Use system permissions to create and manage teams and their members and to adjust system settings. These privileges enable you to effectively administer and maintain the W&B instance.
The following table lists system permissions and which roles hold them (View-Only, Team Member, Team Admin, System Admin):
Configure system settings: System Admin
Create/delete teams: System Admin
Team service account behavior
When you configure a team in your training environment, you can use a service account from that team to log runs in either private or public projects within that team. Additionally, you can attribute those runs to a user if the WANDB_USERNAME or WANDB_USER_EMAIL variable exists in your environment and the referenced user is part of that team.
When you do not configure a team in your training environment and use a service account, the runs log to the named project within that service account’s parent team. In this case as well, you can attribute the runs to a user if the WANDB_USERNAME or WANDB_USER_EMAIL variable exists in your environment and the referenced user is part of the service account’s parent team.
A service account cannot log runs to a private project in a team different from its parent team. A service account can log runs to a project in another team only if that project is set to Open project visibility.
Add social badges to your intro
In your Intro, type / and choose Markdown and paste the markdown snippet that renders your badge. Once you convert it to WYSIWYG, you can resize it.
For example, to add a Twitter follow badge, add [](https://twitter.com/intent/follow?screen_name=weights_biases), replacing weights_biases with your Twitter username.
Team trials
See the pricing page for more information on W&B plans. You can download all your data at any time, either using the dashboard UI or the Export API.
Privacy settings
You can see the privacy settings of all team projects on the team settings page:
app.wandb.ai/teams/your-team-name
Advanced configuration
Secure storage connector
The team-level secure storage connector allows teams to use their own cloud storage bucket with W&B. This provides greater data access control and data isolation for teams with highly sensitive data or strict compliance requirements. Refer to Secure Storage Connector for more information.
2.5.4.5 - Manage storage
Ways to manage W&B data storage.
If you are approaching or exceeding your storage limit, there are multiple paths forward to manage your data. The path that’s best for you will depend on your account type and your current project setup.
Manage storage consumption
W&B offers different methods of optimizing your storage consumption:
Use reference artifacts to track files saved outside the W&B system, instead of uploading them to W&B storage.
You can also choose to delete data to remain under your storage limit. There are several ways to do this:
Delete data interactively with the app UI.
Set a TTL policy on Artifacts so they are automatically deleted.
2.5.4.6 - System metrics
Metrics automatically logged by wandb
This page provides detailed information about the system metrics that are tracked by the W&B SDK.
wandb automatically logs system metrics every 10 seconds.
CPU
Process CPU Percent (CPU)
Percentage of CPU usage by the process, normalized by the number of available CPUs.
W&B assigns a cpu tag to this metric.
CPU Percent
CPU usage of the system on a per-core basis.
W&B assigns a cpu.{i}.cpu_percent tag to this metric.
Process CPU Threads
The number of threads utilized by the process.
W&B assigns a proc.cpu.threads tag to this metric.
Disk
By default, the usage metrics are collected for the / path. To configure the paths to be monitored, use the following setting:
run = wandb.init(
    settings=wandb.Settings(
        _stats_disk_paths=("/System/Volumes/Data", "/home", "/mnt/data"),
    ),
)
Disk Usage Percent
Represents the total system disk usage in percentage for specified paths.
W&B assigns a disk.{path}.usagePercent tag to this metric.
Disk Usage
Represents the total system disk usage in gigabytes (GB) for specified paths.
The paths that are accessible are sampled, and the disk usage (in GB) for each path is appended to the samples.
W&B assigns a disk.{path}.usageGB tag to this metric.
Disk In
Indicates the total system disk read in megabytes (MB).
The initial disk read bytes are recorded when the first sample is taken. Subsequent samples calculate the difference between the current read bytes and the initial value.
W&B assigns a disk.in tag to this metric.
Disk Out
Represents the total system disk write in megabytes (MB).
Similar to Disk In, the initial disk write bytes are recorded when the first sample is taken. Subsequent samples calculate the difference between the current write bytes and the initial value.
W&B assigns a disk.out tag to this metric.
Memory
Process Memory RSS
Represents the Memory Resident Set Size (RSS) in megabytes (MB) for the process. RSS is the portion of memory occupied by a process that is held in main memory (RAM).
W&B assigns a proc.memory.rssMB tag to this metric.
Process Memory Percent
Indicates the memory usage of the process as a percentage of the total available memory.
W&B assigns a proc.memory.percent tag to this metric.
Memory Percent
Represents the total system memory usage as a percentage of the total available memory.
W&B assigns a memory tag to this metric.
Memory Available
Indicates the total available system memory in megabytes (MB).
W&B assigns a proc.memory.availableMB tag to this metric.
Network
Network Sent
Represents the total bytes sent over the network.
The initial bytes sent are recorded when the metric is first initialized. Subsequent samples calculate the difference between the current bytes sent and the initial value.
W&B assigns a network.sent tag to this metric.
Network Received
Indicates the total bytes received over the network.
Similar to Network Sent, the initial bytes received are recorded when the metric is first initialized. Subsequent samples calculate the difference between the current bytes received and the initial value.
W&B assigns a network.recv tag to this metric.
NVIDIA GPU
In addition to the metrics described below, if the process and/or its children use a particular GPU, W&B captures the corresponding metrics as gpu.process.{gpu_index}...
GPU Memory Utilization
Represents the GPU memory utilization in percent for each GPU.
W&B assigns a gpu.{gpu_index}.memory tag to this metric.
GPU Memory Allocated
Indicates the GPU memory allocated as a percentage of the total available memory for each GPU.
W&B assigns a gpu.{gpu_index}.memoryAllocated tag to this metric.
GPU Memory Allocated Bytes
Specifies the GPU memory allocated in bytes for each GPU.
W&B assigns a gpu.{gpu_index}.memoryAllocatedBytes tag to this metric.
GPU Utilization
Reflects the GPU utilization in percent for each GPU.
W&B assigns a gpu.{gpu_index}.gpu tag to this metric.
GPU Temperature
The GPU temperature in Celsius for each GPU.
W&B assigns a gpu.{gpu_index}.temp tag to this metric.
GPU Power Usage Watts
Indicates the GPU power usage in Watts for each GPU.
W&B assigns a gpu.{gpu_index}.powerWatts tag to this metric.
GPU Power Usage Percent
Reflects the GPU power usage as a percentage of its power capacity for each GPU.
W&B assigns a gpu.{gpu_index}.powerPercent tag to this metric.
GPU SM Clock Speed
Represents the clock speed of the Streaming Multiprocessor (SM) on the GPU in MHz. This metric is indicative of the processing speed within the GPU cores responsible for computation tasks.
W&B assigns a gpu.{gpu_index}.smClock tag to this metric.
GPU Memory Clock Speed
Represents the clock speed of the GPU memory in MHz, which influences the rate of data transfer between the GPU memory and processing cores.
W&B assigns a gpu.{gpu_index}.memoryClock tag to this metric.
GPU Graphics Clock Speed
Represents the base clock speed for graphics rendering operations on the GPU, expressed in MHz. This metric often reflects performance during visualization or rendering tasks.
W&B assigns a gpu.{gpu_index}.graphicsClock tag to this metric.
GPU Corrected Memory Errors
Tracks the count of memory errors on the GPU that are automatically corrected by error-checking protocols, indicating recoverable hardware issues.
W&B assigns a gpu.{gpu_index}.correctedMemoryErrors tag to this metric.
GPU Uncorrected Memory Errors
Tracks the count of uncorrected memory errors on the GPU, indicating non-recoverable errors that can impact processing reliability.
W&B assigns a gpu.{gpu_index}.unCorrectedMemoryErrors tag to this metric.
GPU Encoder Utilization
Represents the percentage utilization of the GPU’s video encoder, indicating its load when encoding tasks (for example, video rendering) are running.
W&B assigns a gpu.{gpu_index}.encoderUtilization tag to this metric.
AMD GPU
W&B extracts metrics from the output of the rocm-smi tool supplied by AMD (rocm-smi -a --json).
AMD GPU Utilization
Represents the GPU utilization in percent for each AMD GPU device.
W&B assigns a gpu.{gpu_index}.gpu tag to this metric.
AMD GPU Memory Allocated
Indicates the GPU memory allocated as a percentage of the total available memory for each AMD GPU device.
W&B assigns a gpu.{gpu_index}.memoryAllocated tag to this metric.
AMD GPU Temperature
The GPU temperature in Celsius for each AMD GPU device.
W&B assigns a gpu.{gpu_index}.temp tag to this metric.
AMD GPU Power Usage Watts
The GPU power usage in Watts for each AMD GPU device.
W&B assigns a gpu.{gpu_index}.powerWatts tag to this metric.
AMD GPU Power Usage Percent
Reflects the GPU power usage as a percentage of its power capacity for each AMD GPU device.
W&B assigns a gpu.{gpu_index}.powerPercent tag to this metric.
Apple ARM Mac GPU
Apple GPU Utilization
Indicates the GPU utilization in percent for Apple GPU devices, specifically on ARM Macs.
W&B assigns a gpu.0.gpu tag to this metric.
Apple GPU Memory Allocated
The GPU memory allocated as a percentage of the total available memory for Apple GPU devices on ARM Macs.
W&B assigns a gpu.0.memoryAllocated tag to this metric.
Apple GPU Temperature
The GPU temperature in Celsius for Apple GPU devices on ARM Macs.
W&B assigns a gpu.0.temp tag to this metric.
Apple GPU Power Usage Watts
The GPU power usage in Watts for Apple GPU devices on ARM Macs.
W&B assigns a gpu.0.powerWatts tag to this metric.
Apple GPU Power Usage Percent
The GPU power usage as a percentage of its power capacity for Apple GPU devices on ARM Macs.
W&B assigns a gpu.0.powerPercent tag to this metric.
Graphcore IPU
Graphcore IPUs (Intelligence Processing Units) are unique hardware accelerators designed specifically for machine intelligence tasks.
IPU Device Metrics
These metrics represent various statistics for a specific IPU device. Each metric has a device ID (device_id) and a metric key (metric_key) to identify it. W&B assigns a ipu.{device_id}.{metric_key} tag to this metric.
Metrics are extracted using the proprietary gcipuinfo library, which interacts with Graphcore’s gcipuinfo binary. The sample method fetches these metrics for each IPU device associated with the process ID (pid). Only the metrics that change over time, or the first time a device’s metrics are fetched, are logged to avoid logging redundant data.
For each metric, the method parse_metric is used to extract the metric’s value from its raw string representation. The metrics are then aggregated across multiple samples using the aggregate method.
The following lists available metrics and their units:
Average Board Temperature (average board temp (C)): Temperature of the IPU board in Celsius.
Average Die Temperature (average die temp (C)): Temperature of the IPU die in Celsius.
Clock Speed (clock (MHz)): The clock speed of the IPU in MHz.
IPU Power (ipu power (W)): Power consumption of the IPU in Watts.
IPU Utilization (ipu utilisation (%)): Percentage of IPU utilization.
IPU Session Utilization (ipu utilisation (session) (%)): IPU utilization percentage specific to the current session.
Data Link Speed (speed (GT/s)): Speed of data transmission in Giga-transfers per second.
Google Cloud TPU
Tensor Processing Units (TPUs) are Google’s custom-developed ASICs (Application Specific Integrated Circuits) used to accelerate machine learning workloads.
TPU Memory usage
The current High Bandwidth Memory usage in bytes per TPU core.
W&B assigns a tpu.{tpu_index}.memoryUsageBytes tag to this metric.
TPU Memory usage percentage
The current High Bandwidth Memory usage in percent per TPU core.
W&B assigns a tpu.{tpu_index}.memoryUsagePercent tag to this metric.
TPU Duty cycle
TensorCore duty cycle percentage per TPU device. Tracks the percentage of time over the sample period during which the accelerator TensorCore was actively processing. A larger value means better TensorCore utilization.
W&B assigns a tpu.{tpu_index}.dutyCycle tag to this metric.
AWS Trainium
AWS Trainium is a specialized hardware platform offered by AWS that focuses on accelerating machine learning workloads. The neuron-monitor tool from AWS is used to capture the AWS Trainium metrics.
Trainium Neuron Core Utilization
The utilization percentage of each NeuronCore, reported on a per-core basis.
W&B assigns a trn.{core_index}.neuroncore_utilization tag to this metric.
Trainium Host Memory Usage, Total
The total memory consumption on the host in bytes.
W&B assigns a trn.host_total_memory_usage tag to this metric.
Trainium Neuron Device Total Memory Usage
The total memory usage on the Neuron device in bytes.
W&B assigns a trn.neuron_device_total_memory_usage tag to this metric.
Trainium Host Memory Usage Breakdown:
The following is a breakdown of memory usage on the host:
Application Memory (trn.host_total_memory_usage.application_memory): Memory used by the application.
Constants (trn.host_total_memory_usage.constants): Memory used for constants.
DMA Buffers (trn.host_total_memory_usage.dma_buffers): Memory used for Direct Memory Access buffers.
Tensors (trn.host_total_memory_usage.tensors): Memory used for tensors.
Trainium Neuron Core Memory Usage Breakdown
Detailed memory usage information for each NeuronCore:
OpenMetrics
Capture and log metrics from external endpoints that expose OpenMetrics / Prometheus-compatible data, with support for custom regex-based metric filters to be applied to the consumed endpoints.
Refer to this report for a detailed example of how to use this feature in a particular case of monitoring GPU cluster performance with the NVIDIA DCGM-Exporter.
2.5.4.7 - Anonymous mode
Log and visualize data without a W&B account
Are you publishing code that you want anyone to be able to run easily? Use anonymous mode to let someone run your code, see a W&B dashboard, and visualize results without needing to create a W&B account first.
Allow results to be logged in anonymous mode with:
import wandb
wandb.init(anonymous="allow")
For example, the following code snippet shows how to create and log an artifact with W&B:
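The original snippet is not reproduced here; a minimal sketch, assuming a hypothetical local file named my_dataset.csv and a hypothetical project name, might look like this:
import wandb

# Start a run that viewers can open without a W&B account
run = wandb.init(anonymous="allow", project="anon-demo")  # hypothetical project name

# Create an artifact and add a local file to it
artifact = wandb.Artifact(name="example-dataset", type="dataset")
artifact.add_file("my_dataset.csv")  # hypothetical local file

# Log the artifact so it appears on the anonymous run page
run.log_artifact(artifact)
run.finish()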
Weave is a lightweight toolkit for tracking and evaluating LLM applications. Use W&B Weave to visualize and inspect the execution flow of your LLMs, analyze the inputs and outputs of your LLMs, view the intermediate results and securely store and manage your prompts and LLM chain configurations.
With W&B Weave, you can:
Log and debug language model inputs, outputs, and traces
Build rigorous, apples-to-apples evaluations for language model use cases
Organize all the information generated across the LLM workflow, from experimentation to evaluations to production
Document and share insights across the entire organization by generating live reports in digestible, visual formats that are easily understood by non-technical stakeholders.
Use W&B Artifacts to track and version data as the inputs and outputs of your W&B Runs. For example, a model training run might take in a dataset as input and produce a trained model as output. You can log hyperparameters, metadata, and metrics to a run, and you can use an artifact to log, track, and version the dataset used to train the model as input and another artifact for the resulting model checkpoints as output.
Use cases
You can use artifacts throughout your entire ML workflow as inputs and outputs of runs. You can use datasets, models, or even other artifacts as inputs for processing.
Add one or more files, such as a model file or dataset, to your artifact object.
Log your artifact to W&B.
For example, the following code snippet shows how to log a file called dataset.h5 to an artifact called example_artifact:
import wandb
run = wandb.init(project="artifacts-example", job_type="add-dataset")
artifact = wandb.Artifact(name="example_artifact", type="dataset")
artifact.add_file(local_path="./dataset.h5", name="training_dataset")
artifact.save()

# Logs the artifact version "example_artifact" as a dataset with data from dataset.h5
See the track external files page for information on how to add references to files or directories stored in external object storage, like an Amazon S3 bucket.
Download an artifact
Indicate the artifact you want to mark as input to your run with the use_artifact method.
Following the preceding code snippet, this next code block shows how to use the training_dataset artifact:
artifact = run.use_artifact("training_dataset:latest")  # marks the artifact as an input to the run and returns the artifact object
This returns an artifact object.
Next, use the returned object to download all contents of the artifact:
datadir = artifact.download()  # downloads the full artifact to the default directory
You can pass a custom path into the root parameter to download an artifact to a specific directory. For alternate ways to download artifacts and to see additional parameters, see the guide on downloading and using artifacts.
1. Create an artifact Python object with wandb.Artifact()
Initialize the wandb.Artifact() class to create an artifact object. Specify the following parameters:
Name: Specify a name for your artifact. The name should be unique, descriptive, and easy to remember. Use an artifact’s name both to identify it in the W&B App UI and to reference it when you want to use that artifact.
Type: Provide a type. The type should be simple, descriptive and correspond to a single step of your machine learning pipeline. Common artifact types include 'dataset' or 'model'.
The name and type you provide are used to create a directed acyclic graph. This means you can view the lineage of an artifact on the W&B App.
Artifacts cannot have the same name, even if you specify a different value for the type parameter. In other words, you cannot create an artifact named cats of type dataset and another artifact named cats of type model.
You can optionally provide a description and metadata when you initialize an artifact object. For more information on available attributes and parameters, see wandb.Artifact Class definition in the Python SDK Reference Guide.
The following example demonstrates how to create a dataset artifact:
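Because the snippet itself is not shown above, here is a minimal sketch using placeholder string arguments:
import wandb

# Create an artifact object; replace the placeholder name and type with your own values
artifact = wandb.Artifact(name="<artifact-name>", type="<artifact-type>")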
Replace the string arguments in the preceding code snippet with your own name and type.
2. Add one or more files to the artifact
Add files, directories, external URI references (such as Amazon S3) and more with artifact methods. For example, to add a single text file, use the add_file method:
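A minimal sketch of this call, assuming a hypothetical local file named hello_world.txt:
# Add a single local file to the artifact
artifact.add_file(local_path="hello_world.txt")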
You can also add multiple files with the add_dir method. For more information on how to add files, see Update an artifact.
3. Save your artifact to the W&B server
Finally, save your artifact to the W&B server. Artifacts are associated with a run. Therefore, use a run object’s log_artifact() method to save the artifact.
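A minimal sketch of this step, assuming the run and artifact objects from the previous steps:
# Save the artifact version to W&B and mark it as an output of this run
run.log_artifact(artifact)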
You can optionally construct an artifact outside of a W&B run. For more information, see Track external files.
Calls to log_artifact are performed asynchronously for performant uploads. This can cause surprising behavior when logging artifacts in a loop. For example:
for i in range(10):
    a = wandb.Artifact(
        "race",
        type="dataset",
        metadata={
            "index": i,
        },
    )
    # ... add files to artifact a ...
    run.log_artifact(a)
The artifact version v0 is NOT guaranteed to have an index of 0 in its metadata, as the artifacts may be logged in an arbitrary order.
Add files to an artifact
The following sections demonstrate how to construct artifacts with different file types and from parallel runs.
For the following examples, assume you have a project directory with multiple files and a directory structure:
The following code snippet demonstrates how to add an entire local directory to your artifact:
# Recursively add a directory
artifact.add_dir(local_path="path/file.format", name="optional-prefix")
The following API calls produce the following artifact contents:
artifact.add_dir('images') → cat.png, dog.png
artifact.add_dir('images', name='images') → images/cat.png, images/dog.png
artifact.new_file('hello.txt') → hello.txt
Add a URI reference
Artifacts track checksums and other information for reproducibility if the URI has a scheme that the W&B library knows how to handle.
Add an external URI reference to an artifact with the add_reference method. Replace the 'uri' string with your own URI. Optionally pass the desired path within the artifact for the name parameter.
# Add a URI reference
artifact.add_reference(uri="uri", name="optional-name")
Artifacts currently support the following URI schemes:
http(s)://: A path to a file accessible over HTTP. The artifact will track checksums in the form of etags and size metadata if the HTTP server supports the ETag and Content-Length response headers.
s3://: A path to an object or object prefix in S3. The artifact will track checksums and versioning information (if the bucket has object versioning enabled) for the referenced objects. Object prefixes are expanded to include the objects under the prefix, up to a maximum of 10,000 objects.
gs://: A path to an object or object prefix in GCS. The artifact will track checksums and versioning information (if the bucket has object versioning enabled) for the referenced objects. Object prefixes are expanded to include the objects under the prefix, up to a maximum of 10,000 objects.
For large datasets or distributed training, multiple parallel runs might need to contribute to a single artifact.
import wandb
import time
# We will use ray to launch our runs in parallel
# for demonstration purposes. You can orchestrate
# your parallel runs however you want.
import ray

ray.init()

artifact_type = "dataset"
artifact_name = "parallel-artifact"
table_name = "distributed_table"
parts_path = "parts"
num_parallel = 5

# Each batch of parallel writers should have its own
# unique group name.
group_name = "writer-group-{}".format(round(time.time()))

@ray.remote
def train(i):
    """
    Our writer job. Each writer will add one image to the artifact.
    """
    with wandb.init(group=group_name) as run:
        artifact = wandb.Artifact(name=artifact_name, type=artifact_type)

        # Add data to a wandb table. In this case we use example data
        table = wandb.Table(columns=["a", "b", "c"], data=[[i, i * 2, 2**i]])

        # Add the table to a folder in the artifact
        artifact.add(table, "{}/table_{}".format(parts_path, i))

        # Upserting the artifact creates or appends data to the artifact
        run.upsert_artifact(artifact)

# Launch your runs in parallel
result_ids = [train.remote(i) for i in range(num_parallel)]

# Join on all the writers to make sure their files have
# been added before finishing the artifact.
ray.get(result_ids)

# Once all the writers are finished, finish the artifact
# to mark it ready.
with wandb.init(group=group_name) as run:
    artifact = wandb.Artifact(artifact_name, type=artifact_type)

    # Create a "PartitionedTable" pointing to the folder of tables
    # and add it to the artifact.
    artifact.add(wandb.data_types.PartitionedTable(parts_path), table_name)

    # Finish artifact finalizes the artifact, disallowing future "upserts"
    # to this version.
    run.finish_artifact(artifact)
4.1.2 - Download and use artifacts
Download and use Artifacts from multiple projects.
Download and use an artifact that is already stored on the W&B server, or construct an artifact object and pass it in for de-duplication as necessary.
Team members with view-only seats cannot download artifacts.
Download and use an artifact stored on W&B
Download and use an artifact stored in W&B either inside or outside of a W&B Run. Use the Public API (wandb.Api) to export (or update data) already saved in W&B. For more information, see the W&B Public API Reference guide.
First, import the W&B Python SDK. Next, create a W&B Run:
import wandb
run = wandb.init(project="<example>", job_type="<job-type>")
Indicate the artifact you want to use with the use_artifact method. This returns an artifact object. The following code snippet specifies an artifact called 'bike-dataset' with the alias 'latest':
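The snippet itself is not shown above; it would look roughly like the following:
# Mark the artifact as an input to this run and return the artifact object
artifact = run.use_artifact("bike-dataset:latest")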
Use the object returned to download all the contents of the artifact:
datadir = artifact.download()
You can optionally pass a path to the root parameter to download the contents of the artifact to a specific directory. For more information, see the Python SDK Reference Guide.
Use the get_path method to download only a subset of files:
path = artifact.get_path(name)
This fetches only the file at the path name. It returns an Entry object with the following methods:
Entry.download: Downloads file from the artifact at path name
Entry.ref: If add_reference stored the entry as a reference, returns the URI
References that have schemes that W&B knows how to handle get downloaded just like artifact files. For more information, see Track external files.
First, import the W&B SDK. Next, create an artifact from the Public API Class. Provide the entity, project, artifact, and alias associated with that artifact:
import wandb
api = wandb.Api()
artifact = api.artifact("entity/project/artifact:alias")
Use the object returned to download the contents of the artifact:
artifact.download()
You can optionally pass a path to the root parameter to download the contents of the artifact to a specific directory. For more information, see the API Reference Guide.
Use the wandb artifact get command to download an artifact from the W&B server.
$ wandb artifact get project/artifact:alias --root mnist/
Partially download an artifact
You can optionally download part of an artifact based on a prefix. Using the path_prefix parameter, you can download a single file or the content of a sub-folder.
artifact = run.use_artifact("bike-dataset:latest")
artifact.download(path_prefix="bike.png") # downloads only bike.png
Alternatively, you can download files from a certain directory:
artifact.download(path_prefix="images/bikes/") # downloads files in the images/bikes directory
Use an artifact from a different project
Specify the name of the artifact along with its project name to reference an artifact. You can also reference artifacts across entities by specifying the name of the artifact with its entity name.
The following code example demonstrates how to query an artifact from another project as input to the current W&B run.
import wandb
run = wandb.init(project="<example>", job_type="<job-type>")
# Query W&B for an artifact from another project and mark it
# as an input to this run.
artifact = run.use_artifact("my-project/artifact:alias")

# Use an artifact from another entity and mark it as an input
# to this run.
artifact = run.use_artifact("my-entity/my-project/artifact:alias")
Construct and use an artifact simultaneously
Simultaneously construct and use an artifact. Create an artifact object and pass it to use_artifact. This creates an artifact in W&B if it does not exist yet. The use_artifact API is idempotent, so you can call it as many times as you like.
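A minimal sketch of this pattern; the project, job type, artifact name, and file below are placeholders:
import wandb

run = wandb.init(project="<example>", job_type="<job-type>")

# Construct the artifact locally and add a file to it
artifact = wandb.Artifact("reference-dataset", type="dataset")  # hypothetical artifact name
artifact.add_file("my_dataset.csv")  # hypothetical local file

# If the artifact does not exist in W&B yet it is created; either way it is
# marked as an input to this run. Calling use_artifact again is idempotent.
artifact = run.use_artifact(artifact)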
4.1.3 - Update an artifact
Update an existing Artifact inside and outside of a W&B Run.
Pass desired values to update the description, metadata, and alias of an artifact. Call the save() method to update the artifact on the W&B servers. You can update an artifact during a W&B Run or outside of a Run.
Use the W&B Public API (wandb.Api) to update an artifact outside of a run. Use the Artifact API (wandb.Artifact) to update an artifact during a run.
You cannot update the alias of an artifact linked to a model in the Model Registry.
The following code example demonstrates how to update the description of an artifact using the wandb.Artifact API:
import wandb
run = wandb.init(project="<example>")
artifact = run.use_artifact("<artifact-name>:<alias>")
artifact.description = "<description>"
artifact.save()
The following code example demonstrates how to update the description, metadata, and aliases of an artifact using the wandb.Api API:
import wandb
api = wandb.Api()
artifact = api.artifact("entity/project/artifact:alias")
# Update the description
artifact.description = "My new description"

# Selectively update metadata keys
artifact.metadata["oldKey"] = "new value"

# Replace the metadata entirely
artifact.metadata = {"newKey": "new value"}

# Add an alias
artifact.aliases.append("best")

# Remove an alias
artifact.aliases.remove("latest")

# Completely replace the aliases
artifact.aliases = ["replaced"]

# Persist all artifact modifications
artifact.save()
For more information, see the Weights and Biases Artifact API.
You can also update an Artifact collection in the same way as a singular artifact:
import wandb
run = wandb.init(project="<example>")
api = wandb.Api()
artifact = api.artifact_collection(type="<type-name>", collection="<collection-name>")
artifact.name = "<new-collection-name>"
artifact.description = "<This is where you'd describe the purpose of your collection.>"
artifact.save()
4.1.4 - Create an artifact alias
Use aliases as pointers to specific versions. By default, Run.log_artifact adds the latest alias to the logged version.
An artifact version v0 is created and attached to your artifact when you log an artifact for the first time. W&B checksums the contents when you log again to the same artifact. If the artifact changed, W&B saves a new version v1.
For example, if you want your training script to pull the most recent version of a dataset, specify latest when you use that artifact. The following code example demonstrates how to download the most recent dataset artifact named bike-dataset using the latest alias:
import wandb
run = wandb.init(project="<example-project>")
artifact = run.use_artifact("bike-dataset:latest")
artifact.download()
You can also apply a custom alias to an artifact version. For example, if you want to mark which model checkpoint is the best on the metric AP-50, you could add the string 'best-ap50' as an alias when you log the model artifact.
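As a sketch of that case, with a hypothetical artifact name and checkpoint file, you could pass the custom alias when logging the artifact:
import wandb

run = wandb.init(project="<example-project>")

model_artifact = wandb.Artifact("my-model", type="model")  # hypothetical artifact name
model_artifact.add_file("model_checkpoint.h5")  # hypothetical checkpoint file

# "latest" is applied automatically; "best-ap50" is the custom alias
run.log_artifact(model_artifact, aliases=["best-ap50"])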
4.1.5 - Create an artifact version
Create a new artifact version from a single run or from a distributed process.
Create a new artifact version with a single run or collaboratively with distributed runs. You can optionally create a new artifact version from a previous version known as an incremental artifact.
We recommend that you create an incremental artifact when you need to apply changes to a subset of files in an artifact, particularly when the original artifact is significantly larger than the subset you are changing.
Create new artifact versions from scratch
There are two ways to create a new artifact version: from a single run and from distributed runs. They are defined as follows:
Single run: A single run provides all the data for a new version. This is the most common case and is best suited when the run fully recreates the needed data. For example: outputting saved models or model predictions in a table for analysis.
Distributed runs: A set of runs collectively provides all the data for a new version. This is best suited for distributed jobs which have multiple runs generating data, often in parallel. For example: evaluating a model in a distributed manner, and outputting the predictions.
W&B will create a new artifact and assign it a v0 alias if you pass a name to the wandb.Artifact API that does not exist in your project. W&B checksums the contents when you log again to the same artifact. If the artifact changed, W&B saves a new version v1.
W&B will retrieve an existing artifact if you pass a name and artifact type to the wandb.Artifact API that matches an existing artifact in your project. The retrieved artifact will have a version greater than 1.
Single run
Log a new version of an artifact with a single run that produces all the files in the artifact.
Based on your use case, select one of the tabs below to create a new artifact version inside or outside of a run:
Create an artifact version within a W&B run:
Create a run with wandb.init.
Create a new artifact or retrieve an existing one with wandb.Artifact.
Add files to the artifact with .add_file.
Log the artifact to the run with .log_artifact.
with wandb.init() as run:
    artifact = wandb.Artifact("artifact_name", "artifact_type")

    # Add Files and Assets to the artifact using
    # `.add`, `.add_file`, `.add_dir`, and `.add_reference`
    artifact.add_file("image1.png")
    run.log_artifact(artifact)
Create an artifact version outside of a W&B run:
Create a new artifact or retrieve an existing one with wandb.Artifact.
Add files to the artifact with .add_file.
Save the artifact with .save.
artifact = wandb.Artifact("artifact_name", "artifact_type")

# Add Files and Assets to the artifact using
# `.add`, `.add_file`, `.add_dir`, and `.add_reference`
artifact.add_file("image1.png")
artifact.save()
Distributed runs
Allow a collection of runs to collaborate on a version before committing it. This is in contrast to single run mode described above where one run provides all the data for a new version.
Each run in the collection needs to be aware of the same unique ID (called distributed_id) in order to collaborate on the same version. By default, if present, W&B uses the run’s group as set by wandb.init(group=GROUP) as the distributed_id.
There must be a final run that “commits” the version, permanently locking its state.
Use upsert_artifact to add to the collaborative artifact and finish_artifact to finalize the commit.
Consider the following example. Different runs (labelled below as Run 1, Run 2, and Run 3) add a different image file to the same artifact with upsert_artifact.
Run 1:
with wandb.init() as run:
    artifact = wandb.Artifact("artifact_name", "artifact_type")

    # Add Files and Assets to the artifact using
    # `.add`, `.add_file`, `.add_dir`, and `.add_reference`
    artifact.add_file("image1.png")
    run.upsert_artifact(artifact, distributed_id="my_dist_artifact")
Run 2:
with wandb.init() as run:
    artifact = wandb.Artifact("artifact_name", "artifact_type")

    # Add Files and Assets to the artifact using
    # `.add`, `.add_file`, `.add_dir`, and `.add_reference`
    artifact.add_file("image2.png")
    run.upsert_artifact(artifact, distributed_id="my_dist_artifact")
Run 3
Must run after Run 1 and Run 2 complete. The Run that calls finish_artifact can include files in the artifact, but does not need to.
with wandb.init() as run:
    artifact = wandb.Artifact("artifact_name", "artifact_type")

    # Add Files and Assets to the artifact using
    # `.add`, `.add_file`, `.add_dir`, and `.add_reference`
    artifact.add_file("image3.png")
    run.finish_artifact(artifact, distributed_id="my_dist_artifact")
Create a new artifact version from an existing version
Add, modify, or remove a subset of files from a previous artifact version without the need to re-index the files that didn’t change. Adding, modifying, or removing a subset of files from a previous artifact version creates a new artifact version known as an incremental artifact.
Here are some scenarios for each type of incremental change you might encounter:
add: you periodically add a new subset of files to a dataset after collecting a new batch.
remove: you discovered several duplicate files and want to remove them from your artifact.
update: you corrected annotations for a subset of files and want to replace the old files with the correct ones.
You could create an artifact from scratch to perform the same function as an incremental artifact. However, when you create an artifact from scratch, you will need to have all the contents of your artifact on your local disk. When making an incremental change, you can add, remove, or modify a single file without changing the files from a previous artifact version.
You can create an incremental artifact within a single run or with a set of runs (distributed mode).
Follow the procedure below to incrementally change an artifact:
Obtain the artifact version you want to perform an incremental change on:
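A sketch of this step, assuming an existing artifact named my_artifact; the full flow appears in the combined example below:
saved_artifact = run.use_artifact("my_artifact:latest")  # fetch the version to change
draft_artifact = saved_artifact.new_draft()  # create a draft you can modify incrementally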
Lastly, log or save your changes. The following tabs show you how to save your changes inside and outside of a W&B run. Select the tab that is appropriate for your use case:
run.log_artifact(draft_artifact)
draft_artifact.save()
Putting it all together, the code examples above look like:
with wandb.init(job_type="modify dataset") as run:
    saved_artifact = run.use_artifact(
        "my_artifact:latest"
    )  # fetch artifact and input it into your run
    draft_artifact = saved_artifact.new_draft()  # create a draft version

    # modify a subset of files in the draft version
    draft_artifact.add_file("file_to_add.txt")
    draft_artifact.remove("dir_to_remove/")
    run.log_artifact(
        draft_artifact
    )  # log your changes to create a new version and mark it as output to your run
client = wandb.Api()
saved_artifact = client.artifact("my_artifact:latest")  # load your artifact
draft_artifact = saved_artifact.new_draft()  # create a draft version

# modify a subset of files in the draft version
draft_artifact.remove("deleted_file.txt")
draft_artifact.add_file("modified_file.txt")
draft_artifact.save()  # commit changes to the draft
4.1.6 - Track external files
Track files saved outside of W&B, such as in an Amazon S3 bucket, GCS bucket, HTTP file server, or even an NFS share.
Use reference artifacts to track files saved outside the W&B system, for example in an Amazon S3 bucket, GCS bucket, Azure blob, HTTP file server, or even an NFS share. Log artifacts outside of a W&B Run with the W&B CLI.
Log artifacts outside of runs
W&B creates a run when you log an artifact outside of a run. Each artifact belongs to a run, which in turn belongs to a project. An artifact (version) also belongs to a collection, and has a type.
Use the wandb artifact put command to upload an artifact to the W&B server outside of a W&B run. Provide the name of the project you want the artifact to belong to, along with the name of the artifact (project/artifact_name). Optionally provide the type (TYPE). Replace PATH in the code snippet below with the file path of the artifact you want to upload.
$ wandb artifact put --name project/artifact_name --type TYPE PATH
W&B will create a new project if the project you specify does not exist. For information on how to download an artifact, see Download and use artifacts.
Track artifacts outside of W&B
Use W&B Artifacts for dataset versioning and model lineage, and use reference artifacts to track files saved outside the W&B server. In this mode an artifact only stores metadata about the files, such as URLs, size, and checksums. The underlying data never leaves your system. See the Quick start for information on how to save files and directories to W&B servers instead.
The following describes how to construct reference artifacts and how to best incorporate them into your workflows.
Amazon S3 / GCS / Azure Blob Storage References
Use W&B Artifacts for dataset and model versioning to track references in cloud storage buckets. With artifact references, seamlessly layer tracking on top of your buckets with no modifications to your existing storage layout.
Artifacts abstract away the underlying cloud storage vendor (such as AWS, GCP, or Azure). The information described in the following sections applies uniformly to Amazon S3, Google Cloud Storage, and Azure Blob Storage.
W&B Artifacts support any Amazon S3 compatible interface, including MinIO. The scripts below work as-is, when you set the AWS_S3_ENDPOINT_URL environment variable to point at your MinIO server.
Assume we have a bucket with the following structure:
s3://my-bucket
+-- datasets/
|   +-- mnist/
+-- models/
    +-- cnn/
By default, W&B imposes a 10,000 object limit when adding an object prefix. You can adjust this limit by specifying max_objects= in calls to add_reference.
Our new reference artifact mnist:latest looks and behaves similarly to a regular artifact. The only difference is that the artifact only consists of metadata about the S3/GCS/Azure object such as its ETag, size, and version ID (if object versioning is enabled on the bucket).
W&B will use the default mechanism to look for credentials based on the cloud provider you use. Read the documentation from your cloud provider to learn more about the credentials used:
For AWS, if the bucket is not located in the configured user’s default region, you must set the AWS_REGION environment variable to match the bucket region.
Interact with this artifact similarly to a normal artifact. In the App UI, you can look through the contents of the reference artifact using the file browser, explore the full dependency graph, and scan through the versioned history of your artifact.
Rich media such as images, audio, video, and point clouds may fail to render in the App UI depending on the CORS configuration of your bucket. Adding app.wandb.ai to the allow list in your bucket’s CORS settings allows the App UI to properly render such rich media.
Panels might fail to render in the App UI for private buckets. If your company has a VPN, you could update your bucket’s access policy to whitelist IPs within your VPN.
W&B will use the metadata recorded when the artifact was logged to retrieve the files from the underlying bucket when it downloads a reference artifact. If your bucket has object versioning enabled, W&B will retrieve the object version corresponding to the state of the file at the time an artifact was logged. This means that as you evolve the contents of your bucket, you can still point to the exact iteration of your data a given model was trained on since the artifact serves as a snapshot of your bucket at the time of training.
W&B recommends that you enable ‘Object Versioning’ on your storage buckets if you overwrite files as part of your workflow. With versioning enabled on your buckets, artifacts with references to files that have been overwritten will still be intact because the older object versions are retained.
Based on your use case, read the instructions to enable object versioning: AWS, GCP, Azure.
Tying it together
The following code example demonstrates a simple workflow you can use to track a dataset in Amazon S3, GCS, or Azure that feeds into a training job:
import wandb
run = wandb.init()
artifact = wandb.Artifact("mnist", type="dataset")
artifact.add_reference("s3://my-bucket/datasets/mnist")
# Track the artifact and mark it as an input to
# this run in one swoop. A new artifact version
# is only logged if the files in the bucket changed.
run.use_artifact(artifact)
artifact_dir = artifact.download()
# Perform training here...
To track models, we can log the model artifact after the training script uploads the model files to the bucket:
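The original snippet is not reproduced above; a sketch that mirrors the filesystem example later on this page, with a hypothetical bucket path, might look like:
import wandb

run = wandb.init()

# Training here...
# After the training script uploads the model files to the bucket:
model_artifact = wandb.Artifact("cnn", type="model")
model_artifact.add_reference("s3://my-bucket/models/cnn/")  # hypothetical bucket path
run.log_artifact(model_artifact)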
Filesystem references
Another common pattern for fast access to datasets is to expose an NFS mount point to a remote filesystem on all machines running training jobs. This can be an even simpler solution than a cloud storage bucket because, from the perspective of the training script, the files look just like they are sitting on your local filesystem. Luckily, that ease of use extends to using Artifacts to track references to filesystems, whether they are mounted or not.
Assume we have a filesystem mounted at /mount with the following structure:
mount
+-- datasets/
| +-- mnist/
+-- models/
+-- cnn/
Under mnist/ we have our dataset, a collection of images. Let’s track it with an artifact:
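The snippet is omitted above; a minimal sketch consistent with the combined example later in this section:
import wandb

run = wandb.init()

artifact = wandb.Artifact("mnist", type="dataset")
artifact.add_reference("file:///mount/datasets/mnist/")
run.log_artifact(artifact)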
By default, W&B imposes a 10,000 file limit when adding a reference to a directory. You can adjust this limit by specifying max_objects= in calls to add_reference.
Note the triple slash in the URL. The first component is the file:// prefix that denotes the use of filesystem references. The second is the path to our dataset, /mount/datasets/mnist/.
The resulting artifact mnist:latest looks and acts just like a regular artifact. The only difference is that the artifact only consists of metadata about the files, such as their sizes and MD5 checksums. The files themselves never leave your system.
You can interact with this artifact just as you would a normal artifact. In the UI, you can browse the contents of the reference artifact using the file browser, explore the full dependency graph, and scan through the versioned history of your artifact. However, the UI will not be able to render rich media such as images, audio, etc. as the data itself is not contained within the artifact.
For filesystem references, a download() operation copies the files from the referenced paths to construct the artifact directory. In the above example, the contents of /mount/datasets/mnist will be copied into the directory artifacts/mnist:v0/. If an artifact contains a reference to a file that was overwritten, then download() will throw an error as the artifact can no longer be reconstructed.
Putting everything together, here’s a simple workflow you can use to track a dataset under a mounted filesystem that feeds into a training job:
import wandb
run = wandb.init()
artifact = wandb.Artifact("mnist", type="dataset")
artifact.add_reference("file:///mount/datasets/mnist/")
# Track the artifact and mark it as an input to
# this run in one swoop. A new artifact version
# is only logged if the files under the directory
# changed.
run.use_artifact(artifact)
artifact_dir = artifact.download()
# Perform training here...
To track models, we can log the model artifact after the training script writes the model files to the mount point:
import wandb
run = wandb.init()
# Training here...
# Write model to disk
model_artifact = wandb.Artifact("cnn", type="model")
model_artifact.add_reference("file:///mount/cnn/my_model.h5")
run.log_artifact(model_artifact)
4.1.7 - Manage data
4.1.7.1 - Delete an artifact
Delete artifacts interactively with the App UI or programmatically with the W&B SDK.
Delete artifacts interactively with the App UI or programmatically with the W&B SDK. When you delete an artifact, W&B marks that artifact as a soft-delete. In other words, the artifact is marked for deletion but files are not immediately deleted from storage.
The contents of the artifact remain in a soft-delete, or pending deletion, state until a regularly scheduled garbage collection process reviews all artifacts marked for deletion. The garbage collection process deletes the associated files from storage if the artifact and its associated files are not used by a previous or subsequent artifact version.
The sections in this page describe how to delete specific artifact versions, how to delete an artifact collection, how to delete artifacts with and without aliases, and more. You can schedule when artifacts are deleted from W&B with TTL policies. For more information, see Manage data retention with Artifact TTL policy.
Artifacts that are scheduled for deletion with a TTL policy, deleted with the W&B SDK, or deleted with the W&B App UI are first soft-deleted. Artifacts that are soft deleted undergo garbage collection before they are hard-deleted.
Delete an artifact version
To delete an artifact version:
Select the name of the artifact. This will expand the artifact view and list all the artifact versions associated with that artifact.
From the list of artifacts, select the artifact version you want to delete.
On the right hand side of the workspace, select the kebab dropdown.
Choose Delete.
An artifact version can also be deleted programmatically via the delete() method. See the examples below.
Delete multiple artifact versions with aliases
The following code example demonstrates how to delete artifacts that have aliases associated with them. Provide the entity, project name, and run ID that created the artifacts.
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")
for artifact in run.logged_artifacts():
    artifact.delete()
Set the delete_aliases parameter to True to delete aliases if the artifact has one or more aliases.
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")
for artifact in run.logged_artifacts():
    # Set delete_aliases=True in order to delete
    # artifacts with one or more aliases
    artifact.delete(delete_aliases=True)
Delete multiple artifact versions with a specific alias
The following code demonstrates how to delete multiple artifact versions that have a specific alias. Provide the entity, project name, and run ID that created the artifacts. Replace the deletion logic with your own:
import wandb
api = wandb.Api()
run = api.run("entity/project_name/run_id")

# Delete artifact versions with alias 'v3' or 'v4'
for artifact_version in run.logged_artifacts():
    # Replace with your own deletion logic.
    if artifact_version.name[-2:] == "v3" or artifact_version.name[-2:] == "v4":
        artifact_version.delete(delete_aliases=True)
Delete all versions of an artifact that do not have an alias
The following code snippet demonstrates how to delete all versions of an artifact that do not have an alias. Provide the name of the project and entity for the project and entity keys in wandb.Api, respectively. Replace the <> with the name of your artifact:
import wandb
# Provide your entity and a project name when you
# use wandb.Api methods.
api = wandb.Api(overrides={"project": "project", "entity": "entity"})

artifact_type, artifact_name = "<>", "<>"  # provide type and name
for v in api.artifact_versions(artifact_type, artifact_name):
    # Clean up versions that don't have an alias such as 'latest'.
    # NOTE: You can put whatever deletion logic you want here.
    if len(v.aliases) == 0:
        v.delete()
Delete an artifact collection
To delete an artifact collection:
Navigate to the artifact collection you want to delete and hover over it.
Select the kebab dropdown next to the artifact collection name.
Choose Delete.
You can also delete an artifact collection programmatically with the delete() method. Provide the name of the project and entity for the project and entity keys in wandb.Api, respectively:
import wandb
# Provide your entity and a project name when you
# use wandb.Api methods.
api = wandb.Api(overrides={"project": "project", "entity": "entity"})

collection = api.artifact_collection(
    "<artifact_type>", "entity/project/artifact_collection_name"
)
collection.delete()
How to enable garbage collection based on how W&B is hosted
Garbage collection is enabled by default if you use W&B’s shared cloud. Depending on how you host W&B, you might need to take additional steps to enable garbage collection, including:
Set the GORILLA_ARTIFACT_GC_ENABLED environment variable to true: GORILLA_ARTIFACT_GC_ENABLED=true
Enable bucket versioning if you use AWS, GCP or any other storage provider such as Minio. If you use Azure, enable soft deletion.
Soft deletion in Azure is equivalent to bucket versioning in other storage providers.
The following table describes how to satisfy requirements to enable garbage collection based on your deployment type.
4.1.7.2 - Manage data retention with Artifact TTL policy
Schedule when artifacts are deleted from W&B with a W&B Artifact time-to-live (TTL) policy. When you delete an artifact, W&B marks that artifact as a soft-delete. In other words, the artifact is marked for deletion but files are not immediately deleted from storage. For more information on how W&B deletes artifacts, see the Delete artifacts page.
Check out this video tutorial to learn how to manage data retention with Artifacts TTL in the W&B App.
W&B deactivates the option to set a TTL policy for model artifacts linked to the Model Registry. This is to help ensure that linked models do not accidentally expire if used in production workflows.
Only team admins can view a team’s settings and access team level TTL settings such as (1) permitting who can set or edit a TTL policy or (2) setting a team default TTL.
If you do not see the option to set or edit a TTL policy in an artifact’s details in the W&B App UI or if setting a TTL programmatically does not successfully change an artifact’s TTL property, your team admin has not given you permissions to do so.
Auto-generated Artifacts
Only user-generated artifacts can use TTL policies. Artifacts auto-generated by W&B cannot have TTL policies set for them.
The following Artifact types indicate an auto-generated Artifact:
run_table
code
job
Any Artifact type starting with: wandb-*
You can check an Artifact’s type on the W&B platform or programmatically:
import wandb
run = wandb.init(project="<my-project-name>")
artifact = run.use_artifact(artifact_or_name="<my-artifact-name>")
print(artifact.type)
Replace the values enclosed with <> with your own.
Define who can edit and set TTL policies
Define who can set and edit TTL policies within a team. You can either grant TTL permissions only to team admins, or you can grant both team admins and team members TTL permissions.
Only team admins can define who can set or edit a TTL policy.
Navigate to your team’s profile page.
Select the Settings tab.
Navigate to the Artifacts time-to-live (TTL) section.
From the TTL permissions dropdown, select who can set and edit TTL policies.
Click on Review and save settings.
Confirm the changes and select Save settings.
Create a TTL policy
Set a TTL policy for an artifact either when you create the artifact or retroactively after the artifact is created.
For all the code snippets below, replace the content wrapped in <> with your information to use the code snippet.
Set a TTL policy when you create an artifact
Use the W&B Python SDK to define a TTL policy when you create an artifact. TTL policies are typically defined in days.
Defining a TTL policy when you create an artifact is similar to how you normally create an artifact, except that you also pass a time delta to the artifact’s ttl attribute.
The following code snippet shows how to set a TTL policy for an artifact:
import wandb
from datetime import timedelta
run = wandb.init(project="<my-project-name>")
artifact = run.use_artifact("<my-entity/my-project/my-artifact:alias>")
artifact.ttl = timedelta(days=365 * 2)  # Delete in two years
artifact.save()
The preceding code example sets the TTL policy to two years.
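The snippet above modifies an existing artifact. As a minimal sketch of assigning the ttl attribute when an artifact is first created (the names and the 30-day duration below are placeholders):
import wandb
from datetime import timedelta

run = wandb.init(project="<my-project-name>")
artifact = wandb.Artifact(name="<my-artifact-name>", type="<my-type>")
artifact.add_file("<my-file>")
artifact.ttl = timedelta(days=30)  # Set the TTL policy to 30 days
run.log_artifact(artifact)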
Navigate to your W&B project in the W&B App UI.
Select the artifact icon on the left panel.
From the list of artifacts, expand the artifact type you want to edit.
Select the artifact version you want to edit the TTL policy for.
Click on the Version tab.
From the dropdown, select Edit TTL policy.
Within the modal that appears, select Custom from the TTL policy dropdown.
Within the TTL duration field, set the TTL policy in units of days.
Select the Update TTL button to save your changes.
Set default TTL policies for a team
Only team admins can set a default TTL policy for a team.
Set a default TTL policy for your team. Default TTL policies apply to all existing and future artifacts based on their respective creation dates. Artifacts with existing version-level TTL policies are not affected by the team’s default TTL.
Navigate to your team’s profile page.
Select the Settings tab.
Navigate to the Artifacts time-to-live (TTL) section.
Click on the Set team’s default TTL policy.
Within the Duration field, set the TTL policy in units of days.
Click on Review and save settings.
Confirm the changes and then select Save settings.
Set a TTL policy outside of a run
Use the public API to retrieve an artifact without fetching a run, and set the TTL policy. TTL policies are typically defined in days.
The following code sample shows how to fetch an artifact using the public API and set the TTL policy.
import wandb
from datetime import timedelta

api = wandb.Api()
artifact = api.artifact("entity/project/artifact:alias")
artifact.ttl = timedelta(days=365)  # Delete in one year
artifact.save()
Deactivate a TTL policy
Use the W&B Python SDK or W&B App UI to deactivate a TTL policy for a specific artifact version.
Within your project, select the Artifacts tab in the left sidebar.
Click on a collection.
Within the collection view you can see all of the artifacts in the selected collection. Within the Time to Live column you will see the TTL policy assigned to that artifact.
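A minimal SDK sketch for deactivating a policy, assuming that assigning None to the artifact’s ttl attribute removes it:
import wandb

api = wandb.Api()
artifact = api.artifact("<my-entity/my-project/my-artifact:alias>")
artifact.ttl = None  # Deactivate the TTL policy
artifact.save()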
4.1.7.3 - Manage artifact storage and memory allocation
Manage storage and memory allocation of W&B Artifacts.
W&B stores artifact files in a private Google Cloud Storage bucket located in the United States by default. All files are encrypted at rest and in transit.
During training, W&B locally saves logs, artifacts, and configuration files in the following local directories:
File | Default location | To change the default location, set:
logs | ./wandb | dir in wandb.init, or the WANDB_DIR environment variable
artifacts | ~/.cache/wandb | the WANDB_CACHE_DIR environment variable
configs | ~/.config/wandb | the WANDB_CONFIG_DIR environment variable
Depending on the machine wandb is initialized on, these default folders may not be located in a writeable part of the file system. This might trigger an error.
Clean up local artifact cache
W&B caches artifact files to speed up downloads across versions that share files in common. Over time this cache directory can become large. Run the wandb artifact cache cleanup command to prune the cache and to remove any files that have not been used recently.
The following command demonstrates how to limit the size of the cache to 1 GB. Copy and paste it into your terminal:
$ wandb artifact cache cleanup 1GB
4.1.8 - Explore artifact graphs
Traverse automatically created directed acyclic W&B Artifact graphs.
W&B automatically tracks the artifacts a given run logged as well as the artifacts a given run uses. These artifacts can include datasets, models, evaluation results, or more. You can explore an artifact’s lineage to track and manage the various artifacts produced throughout the machine learning lifecycle.
Lineage
Tracking an artifact’s lineage has several key benefits:
Reproducibility: By tracking the lineage of all artifacts, teams can reproduce experiments, models, and results, which is essential for debugging, experimentation, and validating machine learning models.
Version Control: Artifact lineage involves versioning artifacts and tracking their changes over time. This allows teams to roll back to previous versions of data or models if needed.
Auditing: Having a detailed history of the artifacts and their transformations enables organizations to comply with regulatory and governance requirements.
Collaboration and Knowledge Sharing: Artifact lineage facilitates better collaboration among team members by providing a clear record of attempts as well as what worked, and what didn’t. This helps in avoiding duplication of efforts and accelerates the development process.
Finding an artifact’s lineage
When selecting an artifact in the Artifacts tab, you can see your artifact’s lineage. This graph view shows a general overview of your pipeline.
To view an artifact graph:
Navigate to your project in the W&B App UI
Choose the artifact icon on the left panel.
Select Lineage.
Navigating the lineage graph
The artifact or job type you provide appears in front of its name, with artifacts represented by blue icons and runs represented by green icons. Arrows detail the input and output of a run or artifact on the graph.
You can view the type and the name of the artifact in both the left sidebar and in the Lineage tab.
For a more detailed view, click any individual artifact or run to get more information on a particular object.
Artifact clusters
When a level of the graph has five or more runs or artifacts, it creates a cluster. A cluster has a search bar to find specific versions of runs or artifacts, and you can pull an individual node out of a cluster to continue investigating the lineage of a node inside the cluster.
Clicking on a node opens a preview with an overview of the node. Clicking on the arrow extracts the individual run or artifact so you can examine the lineage of the extracted node.
Create an artifact. First, create a run with wandb.init. Then, create a new artifact or retrieve an existing one with wandb.Artifact. Next, add files to the artifact with .add_file. Finally, log the artifact to the run with .log_artifact. The finished code looks something like this:
with wandb.init() as run:
    artifact = wandb.Artifact("artifact_name", "artifact_type")

    # Add files and assets to the artifact using
    # `.add`, `.add_file`, `.add_dir`, and `.add_reference`
    artifact.add_file("image1.png")
    run.log_artifact(artifact)
Use the artifact object’s logged_by and used_by methods to walk the graph from the artifact:
# Walk up and down the graph from an artifact:
producer_run = artifact.logged_by()
consumer_runs = artifact.used_by()
4.1.9 - Artifact data privacy and compliance
Learn where W&B files are stored by default. Explore how to store sensitive information.
Files are uploaded to a Google Cloud bucket managed by W&B when you log artifacts. The contents of the bucket are encrypted both at rest and in transit. Artifact files are only visible to users who have access to the corresponding project.
When you delete a version of an artifact, it is marked for soft deletion in our database and removed from your storage costs. When you delete an entire artifact, it is queued for permanent deletion and all of its contents are removed from the W&B bucket. If you have specific needs around file deletion, reach out to Customer Support.
For sensitive datasets that cannot reside in a multi-tenant environment, you can use either a private W&B server connected to your cloud bucket or reference artifacts. Reference artifacts track references to private buckets without sending file contents to W&B. Reference artifacts maintain links to files on your buckets or servers. In other words, W&B only keeps track of the metadata associated with the files and not the files themselves.
Create a reference artifact the same way you create a non-reference artifact:
import wandb
run = wandb.init()
artifact = wandb.Artifact("animals", type="dataset")
artifact.add_reference("s3://my-bucket/animals")
For alternatives, contact us at contact@wandb.com to talk about private cloud and on-premises installations.
4.1.10 - Tutorial: Create, track, and use a dataset artifact
Artifacts quickstart shows how to create, track, and use a dataset artifact with W&B.
This walkthrough demonstrates how to create, track, and use a dataset artifact from W&B Runs.
1. Log into W&B
Import the W&B library and log in to W&B. You will need to sign up for a free W&B account if you have not done so already.
import wandb
wandb.login()
2. Initialize a run
Use the wandb.init() API to generate a background process to sync and log data as a W&B Run. Provide a project name and a job type:
# Create a W&B Run. Here we specify 'dataset' as the job type since this example
# shows how to create a dataset artifact.
run = wandb.init(project="artifacts-example", job_type="upload-dataset")
3. Create an artifact object
Create an artifact object with the wandb.Artifact() API. Provide a name for the artifact and a description of the file type for the name and type parameters, respectively.
For example, the following code snippet demonstrates how to create an artifact called ‘bicycle-dataset’ with a ‘dataset’ label:
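A minimal sketch of that snippet, continuing from the run initialized in step 2:
# Create an artifact object with a name and a type
artifact = wandb.Artifact(name="bicycle-dataset", type="dataset")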
For more information about how to construct an artifact, see Construct artifacts.
Add the dataset to the artifact
Add a file to the artifact. Common file types include models and datasets. The following example adds a dataset named dataset.h5 that is saved locally on our machine to the artifact:
# Add a file to the artifact's contents
artifact.add_file(local_path="dataset.h5")
Replace the filename dataset.h5 in the preceding code snippet with the path to the file you want to add to the artifact.
4. Log the dataset
Use the W&B run object’s log_artifact() method to both save your artifact version and declare the artifact as an output of the run.
# Save the artifact version to W&B and mark it
# as the output of this run
run.log_artifact(artifact)
A 'latest' alias is created by default when you log an artifact. For more information about artifact aliases and versions, see Create a custom alias and Create new artifact versions, respectively.
5. Download and use the artifact
The following code example demonstrates the steps you can take to use an artifact you have logged and saved to the W&B servers.
First, initialize a new run object with wandb.init().
Second, use the run object’s use_artifact() method to tell W&B what artifact to use. This returns an artifact object.
Third, use the artifact’s download() method to download the contents of the artifact.
# Create a W&B Run. Here we specify 'training' for 'type'
# because we will use this run to track training.
run = wandb.init(project="artifacts-example", job_type="training")

# Query W&B for an artifact and mark it as input to this run
artifact = run.use_artifact("bicycle-dataset:latest")

# Download the artifact's contents
artifact_dir = artifact.download()
Alternatively, you can use the Public API (wandb.Api) to export (or update) data already saved to W&B outside of a Run. See Track external files for more information.
4.2 - Tables
Iterate on datasets and understand model predictions
A Table is a two-dimensional grid of data where each column has a single type of data. Tables support primitive and numeric types, as well as nested lists, dictionaries, and rich media types.
import wandb
run = wandb.init(project="table-test")
# Create and log a new table.
my_table = wandb.Table(columns=["a", "b"], data=[["a1", "b1"], ["a2", "b2"]])
run.log({"Table Name": my_table})
Pass a Pandas Dataframe to wandb.Table() to create a new table.
import wandb
import pandas as pd
df = pd.read_csv("my_data.csv")
run = wandb.init(project="df-table")
my_table = wandb.Table(dataframe=df)
wandb.log({"Table Name": my_table})
For more information on supported data types, see the wandb.Table in the W&B API Reference Guide.
2. Visualize tables in your project workspace
View the resulting table in your workspace.
Navigate to your project in the W&B App.
Select the name of your run in your project workspace. A new panel is added for each unique table key.
In this example, my_table is logged under the key "Table Name".
3. Compare across model versions
Log sample tables from multiple W&B Runs and compare results in the project workspace. In this example workspace, we show how to combine rows from multiple different versions in the same table.
Use the table filter, sort, and grouping features to explore and evaluate model results.
4.2.2 - Visualize and analyze tables
Visualize and analyze W&B Tables.
Customize your W&B Tables to answer questions about your machine learning model’s performance, analyze your data, and more.
Interactively explore your data to:
Compare changes precisely across models, epochs, or individual examples
Understand higher-level patterns in your data
Capture and communicate your insights with visual samples
W&B Tables possess the following behaviors:
Stateless in an artifact context: any table logged alongside an artifact version resets to its default state after you close the browser window.
Stateful in a workspace or report context: any changes you make to a table in a single run workspace, multi-run project workspace, or Report persist.
For information on how to save your current W&B Table view, see Save your view.
How to view two tables
Compare two tables with a merged view or a side-by-side view. For example, the image below demonstrates a table comparison of MNIST data.
Follow these steps to compare two tables:
Go to your project in the W&B App.
Select the artifacts icon on the left panel.
Select an artifact version.
In the following image we demonstrate a model’s predictions on MNIST validation data after each of five epochs (view interactive example here).
Hover over the second artifact version you want to compare in the sidebar and click Compare when it appears. For example, in the image below we select a version labeled as “v4” to compare to MNIST predictions made by the same model after 5 epochs of training.
Merged view
Initially you see both tables merged together. The first table selected has index 0 and a blue highlight, and the second table has index 1 and a yellow highlight. View a live example of merged tables here.
From the merged view, you can
choose the join key: use the dropdown at the top left to set the column to use as the join key for the two tables. Typically this is the unique identifier of each row, such as the filename of a specific example in your dataset or an incrementing index on your generated samples. Note that it’s currently possible to select any column, which may yield illegible tables and slow queries.
concatenate instead of join: select “concatenating all tables” in this dropdown to union all the rows from both tables into one larger Table instead of joining across their columns
reference each Table explicitly: use 0, 1, and * in the filter expression to explicitly specify a column in one or both table instances
visualize detailed numerical differences as histograms: compare the values in any cell at a glance
Side-by-side view
To view the two tables side-by-side, change the first dropdown from “Merge Tables: Table” to “List of: Table” and then adjust the “Page size” accordingly. Here the first Table selected is on the left and the second one is on the right. You can also compare these tables vertically by clicking the “Vertical” checkbox.
compare the tables at a glance: apply any operations (sort, filter, group) to both tables in tandem and spot any changes or differences quickly. For example, view the incorrect predictions grouped by guess, the hardest negatives overall, the confidence score distribution by true label, etc.
explore two tables independently: scroll through and focus on the side/rows of interest
Compare tables across time
Log a table in an artifact for each meaningful step of training to analyze model performance over training time. For example, you could log a table at the end of every validation step, after every 50 epochs of training, or any frequency that makes sense for your pipeline. Use the side-by-side view to visualize changes in model predictions.
For a more detailed walkthrough of visualizing predictions across training time, see this report and this interactive notebook example.
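A rough sketch of this pattern (the project name, table contents, and per-epoch frequency below are placeholders):
import wandb

run = wandb.init(project="table-demo")
for epoch in range(5):
    # Build a predictions table for this validation step (placeholder rows).
    table = wandb.Table(columns=["id", "prediction", "label"])
    table.add_data(0, "cat", "cat")
    table.add_data(1, "dog", "cat")

    # Log the table inside an artifact so each epoch gets its own version.
    artifact = wandb.Artifact("val_predictions", type="predictions")
    artifact.add(table, "predictions")
    run.log_artifact(artifact, aliases=[f"epoch_{epoch}"])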
Compare tables across model variants
Compare two artifact versions logged at the same step for two different models to analyze model performance across different configurations (hyperparameters, base architectures, and so forth).
For example, compare predictions between a baseline and a new model variant, 2x_layers_2x_lr, where the first convolutional layer doubles from 32 to 64, the second from 128 to 256, and the learning rate from 0.001 to 0.002. From this live example, use the side-by-side view and filter down to the incorrect predictions after 1 (left tab) versus 5 training epochs (right tab).
Save your view
Tables you interact with in the run workspace, project workspace, or a report automatically save their view state. If you apply any table operations and then close your browser, the table retains the last viewed configuration when you next navigate to the table.
Tables you interact with in the artifact context remain stateless.
To save a table from a workspace in a particular state, export it to a W&B Report. To export a table to report:
Select the kebab icon (three vertical dots) in the top right corner of your workspace visualization panel.
Select either Share panel or Add to report.
Examples
The following sections highlight some of the ways you can use tables, with example reports that demonstrate each use case:
View your data
Log metrics and rich media during model training or evaluation, then visualize results in a persistent database synced to the cloud, or to your hosting instance.
View, sort, filter, group, join, and query tables to understand your data and model performance—no need to browse static files or rerun analysis scripts.
Zoom in to visualize a specific prediction at a specific step. Zoom out to see the aggregate statistics, identify patterns of errors, and understand opportunities for improvement. This tool works for comparing steps from a single model training, or results across different model versions.
Audio
Interact with audio tables in this report on timbre transfer. You can compare a recorded whale song with a synthesized rendition of the same melody on an instrument like violin or trumpet. You can also record your own songs and explore their synthesized versions in W&B with this colab.
Text
Browse text samples from training data or generated output, dynamically group by relevant fields, and align your evaluation across model variants or experiment settings. Render text as Markdown or use visual diff mode to compare texts. Explore a simple character-based RNN for generating Shakespeare in this report.
Video
Browse and aggregate over videos logged during training to understand your models. Here is an early example using the SafeLife benchmark for RL agents seeking to minimize side effects.
Like all W&B Artifacts, Tables can be converted into pandas dataframes for easy data exporting.
Convert table to artifact
First, you’ll need to convert the table to an artifact. The easiest way to do this is to use artifact.get(table, "table_name"):
# Create and log a new table.
with wandb.init() as r:
    artifact = wandb.Artifact("my_dataset", type="dataset")
    table = wandb.Table(
        columns=["a", "b", "c"], data=[(i, i * 2, 2**i) for i in range(10)]
    )
    artifact.add(table, "my_table")
    wandb.log_artifact(artifact)

# Retrieve the created table using the artifact you created.
with wandb.init() as r:
    artifact = r.use_artifact("my_dataset:latest")
    table = artifact.get("my_table")
Convert artifact to Dataframe
Then, convert the table into a dataframe:
# Following from the last code example:
df = table.get_dataframe()
Export Data
Now you can export the data using any method your DataFrame supports:
# Converting the table data to .csv
df.to_csv("example.csv", encoding="utf-8")
4.3 - Reports
Share updates with collaborators, either as a LaTeX zip file or a PDF.
The following image shows a section of a report created from metrics that were logged to W&B over the course of training.
View the report where the above image was taken from here.
How it works
Create a collaborative report with a few clicks.
Navigate to your W&B project workspace in the W&B App.
Click the Create report button in the upper right corner of your workspace.
A modal titled Create Report will appear. Select the charts and panels you want to add to your report. (You can add or remove charts and panels later).
Click Create report.
Edit the report to your desired state.
Click Publish to project.
Click the Share button to share your report with collaborators.
See the Create a report page for more information on how to create reports interactively and programmatically with the W&B Python SDK.
How to get started
Depending on your use case, explore the following resources to get started with W&B Reports:
Navigate to your project workspace in the W&B App.
Click Create report in the upper right corner of your workspace.
A modal will appear. Select the charts you would like to start with. You can add or delete charts later from the report interface.
Select the Filter run sets option to prevent new runs from being added to your report. You can toggle this option on or off. Once you click Create report, a draft report will be available in the report tab to continue working on.
Navigate to your project workspace in the W&B App.
Select the Reports tab (clipboard image) in your project.
Select the Create Report button on the report page.
Create a report programmatically with the wandb library.
Install W&B SDK and Workspaces API:
pip install wandb wandb-workspaces
Next, import the workspaces API:
import wandb
import wandb_workspaces.reports.v2 as wr
Create a report instance with the wandb_workspaces.reports.v2.Report class. Specify a name for the project:
report = wr.Report(project="report_standard")
Save the report. Reports are not uploaded to the W&B server until you call the .save() method:
report.save()
For information on how to edit a report interactively with the App UI or programmatically, see Edit a report.
4.3.2 - Edit a report
Edit a report interactively with the App UI or programmatically with the W&B SDK.
Edit a report interactively with the App UI or programmatically with the W&B SDK.
Reports consist of blocks. Blocks make up the body of a report. Within these blocks you can add text, images, embedded visualizations, plots from experiments and runs, and panel grids.
Panel grids are a specific type of block that hold panels and run sets. Run sets are a collection of runs logged to a project in W&B. Panels are visualizations of run set data.
Ensure that you have wandb-workspaces installed in addition to the W&B Python SDK if you want to programmatically edit a report:
pip install wandb wandb-workspaces
Add plots
Each panel grid has a set of run sets and a set of panels. The run sets at the bottom of the section control what data shows up on the panels in the grid. Create a new panel grid if you want to add charts that pull data from a different set of runs.
Enter a forward slash (/) in the report to display a dropdown menu. Select Add panel to add a panel. You can add any panel that is supported by W&B, including a line plot, scatter plot or parallel coordinates chart.
Add plots to a report programmatically with the SDK. Pass a list of one or more plot or chart objects to the panels parameter in the PanelGrid Public API Class. Create a plot or chart object with its associated Python Class.
The following example demonstrates how to create a line plot and a scatter plot.
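A minimal sketch, assuming placeholder metric names (time, velocity, acceleration) logged by the runs in the project:
import wandb_workspaces.reports.v2 as wr

report = wr.Report(project="report-editing")
report.blocks = [
    wr.PanelGrid(
        panels=[
            wr.LinePlot(x="time", y="velocity"),
            wr.ScatterPlot(x="time", y="acceleration"),
        ]
    )
]
report.save()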
For more information about available plots and charts you can add to a report programmatically, see wr.panels.
Add run sets
Add run sets from projects interactively with the App UI or the W&B SDK.
Enter a forward slash (/) in the report to display a dropdown menu. From the dropdown, choose Panel Grid. This will automatically import the run set from the project the report was created from.
Add run sets from projects with the wr.Runset() and wr.PanelGrid classes. The following procedure describes how to add a run set; a sketch follows the steps below:
Create a wr.Runset() object instance. Provide the name of the project that contains the runsets for the project parameter and the entity that owns the project for the entity parameter.
Create a wr.PanelGrid() object instance. Pass a list of one or more runset objects to the runsets parameter.
Store one or more wr.PanelGrid() object instances in a list.
Update the report instance blocks attribute with the list of panel grid instances.
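A minimal sketch of these steps, with placeholder entity and project names:
import wandb_workspaces.reports.v2 as wr

report = wr.Report(project="report-editing")

# Steps 1 and 2: create a run set for an existing project and a panel grid that uses it.
panel_grids = wr.PanelGrid(
    runsets=[wr.Runset(project="<project-name>", entity="<entity-name>")]
)

# Steps 3 and 4: store the panel grid in a list and attach it to the report's blocks.
report.blocks = [panel_grids]
report.save()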
Add code blocks
Add code blocks to your report interactively with the App UI or with the W&B SDK.
Enter a forward slash (/) in the report to display a dropdown menu. From the dropdown choose Code.
Select the name of the programming language on the right hand of the code block. This will expand a dropdown. From the dropdown, select your programming language syntax. You can choose from Javascript, Python, CSS, JSON, HTML, Markdown, and YAML.
Use the wr.CodeBlock Class to create a code block programmatically. Provide the name of the language and the code you want to display for the language and code parameters, respectively.
For example, the following demonstrates a code block that displays a YAML list:
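A minimal sketch, assuming the code parameter accepts the lines to display as a list of strings:
import wandb_workspaces.reports.v2 as wr

report = wr.Report(project="report-editing")
report.blocks = [
    wr.CodeBlock(
        code=["this:", "- is", "- a", "cool:", "- yaml", "- file"],
        language="yaml",
    )
]
report.save()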
Add markdown
Add markdown to your report interactively with the App UI or with the W&B SDK.
Enter a forward slash (/) in the report to display a dropdown menu. From the dropdown choose Markdown.
Use the wandb.apis.reports.MarkdownBlock Class to create a markdown block programmatically. Pass a string to the text parameter:
import wandb
import wandb_workspaces.reports.v2 as wr
report = wr.Report(project="report-editing")
report.blocks = [
wr.MarkdownBlock(text="Markdown cell with *italics* and **bold** and $e=mc^2$")
]
This will render a markdown block similar to:
Add HTML elements
Add HTML elements to your report interactively with the App UI or with the W&B SDK.
Enter a forward slash (/) in the report to display a dropdown menu. From the dropdown select a type of text block. For example, to create an H2 heading block, select the Heading 2 option.
Pass a list of one or more HTML elements to the wandb.apis.reports.blocks attribute. The following example demonstrates how to create an H1, an H2, and an unordered list:
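A minimal sketch, with placeholder heading text and bullet items:
import wandb_workspaces.reports.v2 as wr

report = wr.Report(project="report-editing")
report.blocks = [
    wr.H1(text="This is a heading 1"),
    wr.H2(text="This is a heading 2"),
    wr.UnorderedList(items=["Bullet 1", "Bullet 2"]),
]
report.save()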
This renders HTML elements similar to the following:
Embed rich media links
Embed rich media within the report with the App UI or with the W&B SDK.
Copy and paste URLs into reports to embed rich media within the report. The following animations demonstrate how to copy and paste URLs from Twitter, YouTube, and SoundCloud.
Twitter
Copy and paste a Tweet link URL into a report to view the Tweet within the report.
Youtube
Copy and paste a YouTube video URL link to embed a video in the report.
SoundCloud
Copy and paste a SoundCloud link to embed an audio file into a report.
Pass a list of one or more embedded media objects to the wandb.apis.reports.blocks attribute. The following example demonstrates how to embed video and Twitter media into a report:
import wandb
import wandb_workspaces.reports.v2 as wr
report = wr.Report(project="report-editing")
report.blocks = [
wr.Video(url="https://www.youtube.com/embed/6riDJMI-Y8U"),
wr.Twitter(
embed_html='<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The voice of an angel, truly. <a href="https://twitter.com/hashtag/MassEffect?src=hash&ref_src=twsrc%5Etfw">#MassEffect</a> <a href="https://t.co/nMev97Uw7F">pic.twitter.com/nMev97Uw7F</a></p>— Mass Effect (@masseffect) <a href="https://twitter.com/masseffect/status/1428748886655569924?ref_src=twsrc%5Etfw">August 20, 2021</a></blockquote>\n' ),
]
report.save()
Duplicate and delete panel grids
If you have a layout that you would like to reuse, you can select a panel grid and copy-paste it to duplicate it in the same report or even paste it into a different report.
Highlight a whole panel grid section by selecting the drag handle in the upper right corner. Click and drag to highlight and select a region in a report such as panel grids, text, and headings.
Select a panel grid and press delete on your keyboard to delete a panel grid.
Collapse headers to organize Reports
Collapse headers in a Report to hide content within a text block. When the report is loaded, only headers that are expanded will show content. Collapsing headers in reports can help organize your content and prevent excessive data loading. The following GIF demonstrates the process.
4.3.3 - Collaborate on reports
Collaborate and share W&B Reports with peers, co-workers, and your team.
Once you have saved a report, you can select the Share button to collaborate. A draft copy of the report is created when you select the Edit button. Draft reports auto-save. Select Save to report to publish your changes to the shared report.
A warning notification will appear if an edit conflict occurs. This can occur if you and another collaborator edit the same report at the same time. The warning notification will guide you to resolve potential edit conflicts.
Comment on reports
Click the comment button on a panel in a report to add a comment directly to that panel.
4.3.4 - Clone and export reports
Export a W&B Report as a PDF or LaTeX.
Export reports
Export a report as a PDF or LaTeX. Within your report, select the kebab icon to expand the dropdown menu. Choose Download and select either PDF or LaTeX output format.
Cloning reports
Within your report, select the kebab icon to expand the dropdown menu. Choose the Clone this report button. Pick a destination for your cloned report in the modal. Choose Clone report.
Clone a report to reuse a project’s template and format. Cloned reports are visible to your team if you clone a report within the team’s account. Reports cloned within an individual’s account are only visible to that user.
4.3.5 - Embed a report
Embed W&B reports directly into Notion or with an HTML IFrame element.
HTML iframe element
Select the Share button on the upper right hand corner within a report. A modal window will appear. Within the modal window, select Copy embed code. The copied code will render within an Inline Frame (IFrame) HTML element. Paste the copied code into an iframe HTML element of your choice.
Only public reports are viewable when embedded.
Confluence
The following animation demonstrates how to insert the direct link to the report within an IFrame cell in Confluence.
Notion
The following animation demonstrates how to insert a report into a Notion document using an Embed block in Notion and the report’s embed code.
Gradio
You can use the gr.HTML element to embed W&B Reports within Gradio Apps and use them within Hugging Face Spaces.
import gradio as gr
def wandb_report(url):
    iframe = f'<iframe src={url} style="border:none;height:1024px;width:100%">'
    return gr.HTML(iframe)


with gr.Blocks() as demo:
    report = wandb_report(
        "https://wandb.ai/_scott/pytorch-sweeps-demo/reports/loss-22-10-07-16-00-17---VmlldzoyNzU2NzAx"
    )
demo.launch()
4.3.6 - Compare runs across projects
Compare runs from two different projects with cross-project reports.
Compare runs from two different projects with cross-project reports. Use the project selector in the run set table to pick a project.
The visualizations in the section pull columns from the first active run set. If you do not see the metric you are looking for in the line plot, make sure that the first run set checked in the section has that column available.
This feature supports history data on time series lines, but we don’t support pulling different summary metrics from different projects. In other words, you cannot create a scatter plot from columns that are only logged in another project.
If you need to compare runs from two projects and the columns are not working, add a tag to the runs in one project and then move those runs to the other project. You can still filter only the runs from each project, but the report includes all the columns for both sets of runs.
View-only report links
Share a view-only link to a report that is in a private project or team project.
View-only report links add a secret access token to the URL, so anyone who opens the link can view the page. Anyone can use the magic link to view the report without logging in first. For customers on W&B Local private cloud installations, these links remain behind your firewall, so only members of your team with access to your private instance and access to the view-only link can view the report.
In view-only mode, someone who is not logged in can see the charts and mouse over to see tooltips of values, zoom in and out on charts, and scroll through columns in the table. When in view mode, they cannot create new charts or new table queries to explore the data. View-only visitors to the report link won’t be able to click a run to get to the run page. Also, the view-only visitors would not be able to see the share modal but instead would see a tooltip on hover which says: Sharing not available for view only access.
Magic links are only available for “Private” and “Team” projects. For “Public” (anyone can view) or “Open” (anyone can view and contribute runs) projects, view-only links cannot be turned on or off because these projects are already available to anyone with the link.
Send a graph to a report
Send a graph from your workspace to a report to keep track of your progress. Click the dropdown menu on the chart or panel you’d like to copy to a report and click Add to report to select the destination report.
4.3.7 - Example reports
Reports gallery
Notes: Add a visualization with a quick summary
Capture an important observation, an idea for future work, or a milestone reached in the development of a project. All experiment runs in your report will link to their parameters, metrics, logs, and code, so you can save the full context of your work.
Jot down some text and pull in relevant charts to illustrate your insight.
Save the best examples from a complex code base for easy reference and future interaction. See the LIDAR point clouds W&B Report for an example of how to visualize LIDAR point clouds from the Lyft dataset and annotate with 3D bounding boxes.
Collaboration: Share findings with your colleagues
Explain how to get started with a project, share what you’ve observed so far, and synthesize the latest findings. Your colleagues can make suggestions or discuss details using comments on any panel or at the end of the report.
Include dynamic settings so that your colleagues can explore for themselves, get additional insights, and better plan their next steps. In this example, three types of experiments can be visualized independently, compared, or averaged.
See the SafeLife benchmark experiments W&B Report for an example of how to share first runs and observations of a benchmark.
Work log: Track what you’ve tried and plan next steps
Write down your thoughts on experiments, your findings, and any gotchas and next steps as you work through a project, keeping everything organized in one place. This lets you “document” all the important pieces beyond your scripts. See the Who Is Them? Text Disambiguation With Transformers W&B Report for an example of how you can report your findings.
Tell the story of a project, which you and others can reference later to understand how and why a model was developed. See The View from the Driver’s Seat W&B Report for how you can report your findings.
See the Learning Dexterity End-to-End W&B Report for an example of how the OpenAI Robotics team used W&B Reports to run massive machine learning projects.
5 - W&B Platform
W&B Platform is the foundational infrastructure, tooling, and governance scaffolding that supports W&B products such as Core, Models, and Weave.
W&B Platform is available in three different deployment options:
The following responsibility matrix outlines some of the key differences between the different options:
Deployment options
The following sections provide an overview of each deployment type.
W&B Multi-tenant Cloud
W&B Multi-tenant Cloud is a fully managed service deployed in W&B’s cloud infrastructure, where you can seamlessly access W&B products at the desired scale, with cost-efficient pricing options and continuous updates for the latest features and functionality. W&B recommends using Multi-tenant Cloud for your product trial, or for managing your production AI workflows if you do not need the security of a private deployment, self-service onboarding is important, and cost efficiency is critical.
W&B Dedicated Cloud
W&B Dedicated Cloud is a single-tenant, fully managed service deployed in W&B’s cloud infrastructure. It is the best place to onboard W&B if your organization requires conformance to strict governance controls including data residency, needs advanced security capabilities, and wants to optimize its AI operating costs by not having to build and manage the required infrastructure with the necessary security, scale, and performance characteristics.
W&B Self-managed
With this option, you can deploy and manage W&B Server on your own managed infrastructure. W&B Server is a self-contained packaged mechanism to run the W&B Platform and its supported W&B products. W&B recommends this option if all your existing infrastructure is on-prem, or if your organization has strict regulatory needs that are not satisfied by W&B Dedicated Cloud. With this option, you are fully responsible for provisioning, and for the continuous maintenance and upgrades of, the infrastructure required to support W&B Server.
If you’re looking to try any of the W&B products, W&B recommends using the Multi-tenant Cloud. If you’re looking for an enterprise-friendly setup, choose the appropriate deployment type for your trial here.
5.1 - Deployment options
5.1.1 - Use W&B Multi-tenant SaaS
W&B Multi-tenant Cloud is a fully managed platform deployed in W&B’s Google Cloud Platform (GCP) account in GCP’s North America regions. W&B Multi-tenant Cloud utilizes autoscaling in GCP to ensure that the platform scales appropriately based on increases or decreases in traffic.
Data security
For non-enterprise plan users, all data is stored only in the shared cloud storage and is processed with shared cloud compute services. Depending on your pricing plan, you may be subject to storage limits.
Enterprise plan users can bring their own bucket (BYOB) using the secure storage connector at the team level to store their files such as models, datasets, and more. You can configure a single bucket for multiple teams or you can use separate buckets for different W&B Teams. If you do not configure secure storage connector for a team, that data is stored in the shared cloud storage.
Identity and access management (IAM)
If you are on the enterprise plan, you can use the identity and access management capabilities for secure authentication and effective authorization in your W&B Organization. The following features are available for IAM in Multi-tenant Cloud:
SSO authentication with OIDC or SAML. Reach out to your W&B team or support if you would like to configure SSO for your organization.
Define the scope of a W&B project to limit who can view, edit, and submit W&B runs to it with restricted projects.
Monitor
Organization admins can manage usage and billing for their account from the Billing tab in their account view. If using the shared cloud storage on Multi-tenant Cloud, an admin can optimize storage usage across different teams in their organization.
Maintenance
W&B Multi-tenant Cloud is a multi-tenant, fully managed platform. Since W&B Multi-tenant Cloud is managed by W&B, you do not incur the overhead and costs of provisioning and maintaining the W&B platform.
Compliance
Security controls for Multi-tenant Cloud are periodically audited internally and externally. Refer to the W&B Security Portal to request the SOC2 report and other security and compliance documents.
5.1.2 - Use W&B Dedicated Cloud
W&B Dedicated Cloud is a single-tenant, fully managed platform deployed in W&B’s AWS, GCP, or Azure cloud accounts. Each Dedicated Cloud instance has its own isolated network, compute, and storage from other W&B Dedicated Cloud instances. Your W&B specific metadata and data is stored in an isolated cloud storage and is processed using isolated cloud compute services.
Similar to W&B Multi-tenant Cloud, you can configure a single bucket for multiple teams or you can use separate buckets for different teams. If you do not configure secure storage connector for a team, that data is stored in the instance level bucket.
In addition to BYOB with secure storage connector, you can utilize IP allowlisting to restrict access to your Dedicated Cloud instance from only trusted network locations.
Use the identity and access management capabilities for secure authentication and effective authorization in your W&B Organization. The following features are available for IAM in Dedicated Cloud instances:
Use Audit logs to track user activity within your teams and to conform to your enterprise governance requirements. Also, you can view organization usage in your Dedicated Cloud instance with the W&B Organization Dashboard.
Maintenance
Similar to W&B Multi-tenant Cloud, you do not incur the overhead and costs of provisioning and maintaining the W&B platform with Dedicated Cloud.
To understand how W&B manages updates on Dedicated Cloud, refer to the server release process.
Compliance
Security controls for W&B Dedicated Cloud are periodically audited internally and externally. Refer to the W&B Security Portal to request the security and compliance documents for your product assessment exercise.
Submit this form if you are interested in using Dedicated Cloud.
5.1.2.1 - Supported Dedicated Cloud regions
AWS, GCP, and Azure support cloud computing services in multiple locations worldwide. Global regions help ensure that you satisfy requirements related to data residency & compliance, latency, cost efficiency and more. W&B supports many of the available global regions for Dedicated Cloud.
Reach out to W&B Support if your preferred AWS, GCP, or Azure Region is not listed. W&B can validate if the relevant region has all the services that Dedicated Cloud needs and prioritize support depending on the outcome of the evaluation.
Supported AWS Regions
The following table lists AWS Regions that W&B currently supports for Dedicated Cloud instances.
Supported GCP Regions
The following table lists GCP Regions that W&B currently supports for Dedicated Cloud instances.
Region location | Region name
South Carolina | us-east1
N. Virginia | us-east4
Iowa | us-central1
Oregon | us-west1
Los Angeles | us-west2
Las Vegas | us-west4
Toronto | northamerica-northeast2
Belgium | europe-west1
London | europe-west2
Frankfurt | europe-west3
Netherlands | europe-west4
Sydney | australia-southeast1
Tokyo | asia-northeast1
Seoul | asia-northeast3
For more information about GCP Regions, see Regions and zones in the GCP Documentation.
Supported Azure Regions
The following table lists Azure regions that W&B currently supports for Dedicated Cloud instances.
Region location | Region name
Virginia | eastus
Iowa | centralus
Washington | westus2
California | westus
Canada Central | canadacentral
France Central | francecentral
Netherlands | westeurope
Tokyo, Saitama | japaneast
Seoul | koreacentral
For more information about Azure regions, see Azure geographies in the Azure Documentation.
5.1.2.2 - Export data from Dedicated cloud
Export data from Dedicated cloud
If you would like to export all the data managed in your Dedicated cloud instance, you can use the W&B SDK API to extract the runs, metrics, artifacts, and more with the Import and Export API. The following table covers some of the key exporting use cases.
If you manage artifacts stored in the Dedicated cloud with Secure Storage Connector, you may not need to export the artifacts using the W&B SDK API.
Using the W&B SDK API to export all of your data can be slow if you have a large number of runs, artifacts, and so on. W&B recommends running the export process in appropriately sized batches so as not to overwhelm your Dedicated cloud instance.
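As an illustration, a minimal sketch that exports run metrics and files for a hypothetical entity/project with the public API (batching and error handling omitted):
import wandb

api = wandb.Api()

# Iterate over the runs in a project and export their metrics and files.
for run in api.runs("entity/project"):
    metrics_df = run.history()  # per-step metrics as a pandas DataFrame
    metrics_df.to_csv(f"{run.id}_metrics.csv")
    for f in run.files():
        f.download(root=f"./{run.id}", replace=True)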
5.1.3 - Self-managed
Deploying W&B in production
Use self-managed cloud or on-prem infrastructure
W&B recommends fully managed deployment options such as W&B Multi-tenant Cloud or W&B Dedicated Cloud deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required.
Your IT/DevOps/MLOps team is responsible for provisioning your deployment, managing upgrades, and continuously maintaining your self managed W&B Server instance.
Deploy W&B Server within self managed cloud accounts
W&B recommends that you use official W&B Terraform scripts to deploy W&B Server into your AWS, GCP, or Azure cloud account.
See specific cloud provider documentation for more information on how to set up W&B Server in AWS, GCP or Azure.
Deploy W&B Server in on-prem infrastructure
You need to configure several infrastructure components in order to set up W&B Server in your on-prem infrastructure. Some of those components include, but are not limited to:
(Strongly recommended) Kubernetes cluster
MySQL 8 database cluster
Amazon S3-compatible object storage
Redis cache cluster
See Install on on-prem infrastructure for more information on how to install W&B Server on your on-prem infrastructure. W&B can provide recommendations for the different components and provide guidance through the installation process.
Deploy W&B Server on a custom cloud platform
You can deploy W&B Server to a cloud platform that is not AWS, GCP, or Azure. Requirements for that are similar to those for deploying in on-prem infrastructure.
Obtain your W&B Server license
You need a W&B trial license to complete your configuration of the W&B server. Open the Deploy Manager to generate a free trial license.
If you do not already have a W&B account, create one to generate your free license.
If you need an enterprise license for W&B Server which includes support for important security & other enterprise-friendly capabilities, submit this form or reach out to your W&B team.
The URL redirects you to a Get a License for W&B Local form. Provide the following information:
Choose a deployment type from the Choose Platform step.
Select the owner of the license or add a new organization in the Basic Information step.
Provide a name for the instance in the Name of Instance field and optionally provide a description in the Description field in the Get a License step.
Select the Generate License Key button.
A page displays with an overview of your deployment along with the license associated with the instance.
5.1.3.1 - Reference Architecture
W&B Reference Architecture
This page describes a reference architecture for a Weights & Biases deployment and outlines the recommended infrastructure and resources to support a production deployment of the platform.
Depending on your chosen deployment environment for Weights & Biases (W&B), various services can help to enhance the resiliency of your deployment.
For instance, major cloud providers offer robust managed database services which help to reduce the complexity of database configuration, maintenance, high availability, and resilience.
This reference architecture addresses some common deployment scenarios and shows how you can integrate your W&B deployment with cloud vendor services for optimal performance and reliability.
Before you start
Running any application in production comes with its own set of challenges, and W&B is no exception. While we aim to streamline the process, certain complexities may arise depending on your unique architecture and design decisions. Typically, managing a production deployment involves overseeing various components, including hardware, operating systems, networking, storage, security, the W&B platform itself, and other dependencies. This responsibility extends to both the initial setup of the environment and its ongoing maintenance.
Consider carefully whether a self-managed approach with W&B is suitable for your team and specific requirements.
A strong understanding of how to run and maintain production-grade applications is an important prerequisite before you deploy self-managed W&B. If your team needs assistance, our Professional Services team and partners offer support for implementation and optimization.
Application layer
The application layer consists of a multi-node Kubernetes cluster, with resilience against node failures. The Kubernetes cluster runs and maintains W&B’s pods.
Storage layer
The storage layer consists of a MySQL database and object storage. The MySQL database stores metadata and the object storage stores artifacts such as models and datasets.
Infrastructure requirements
Kubernetes
The W&B Server application is deployed as a Kubernetes Operator that deploys multiple Pods. For this reason, W&B requires a Kubernetes cluster with:
A fully configured and functioning Ingress controller
The capability to provision Persistent Volumes.
MySQL
W&B stores metadata in a MySQL database. The database’s performance and storage requirements depend on the shapes of the model parameters and related metadata. For example, the database grows in size as you track more training runs, and load on the database increases based on queries in run tables, user workspaces, and reports.
Consider the following when you deploy a self-managed MySQL database:
Backups. You should periodically back up the database to a separate facility. W&B recommends daily backups with at least 1 week of retention.
Performance. The disk the server is running on should be fast. W&B recommends running the database on an SSD or accelerated NAS.
Monitoring. Monitor the database for load. If CPU usage is sustained above 40% for more than 5 minutes, it is likely a sign that the server is resource-starved.
Availability. Depending on your availability and durability requirements, you might want to configure a hot standby on a separate machine that streams all updates in real time from the primary server and can be failed over to if the primary server crashes or becomes corrupted.
Object storage
W&B requires object storage with pre-signed URL and CORS support, deployed in Amazon S3, Azure Cloud Storage, Google Cloud Storage, or a storage service compatible with Amazon S3.
Versions
Kubernetes: at least version 1.29.
MySQL: at least 8.0.
Networking
In a deployment connected to a public or private network, egress to the following endpoints is required during installation and at runtime:
* https://deploy.wandb.ai
* https://charts.wandb.ai
* https://docker.io
* https://quay.io
* https://gcr.io
Access to W&B and to the object storage is required for the training infrastructure and for each system that tracks experiments.
DNS
The fully qualified domain name (FQDN) of the W&B deployment must resolve to the IP address of the ingress/load balancer using an A record.
SSL/TLS
W&B requires a valid signed SSL/TLS certificate for secure communication between clients and the server. SSL/TLS termination must occur on the ingress/load balancer. The W&B Server application does not terminate SSL or TLS connections.
W&B does not recommend using self-signed certificates or custom CAs.
Supported CPU architectures
W&B runs on the Intel (x86) CPU architecture. ARM is not supported.
Infrastructure provisioning
Terraform is the recommended way to deploy W&B for production. Using Terraform, you define the required resources, their references to other resources, and their dependencies. W&B provides Terraform modules for the major cloud providers. For details, refer to Deploy W&B Server within self managed cloud accounts.
Sizing
Use the following general guidelines as a starting point when planning a deployment. W&B recommends that you monitor all components of a new deployment closely and that you make adjustments based on observed usage patterns. Continue to monitor production deployments over time and make adjustments as needed to maintain optimal performance.
Models only
Kubernetes
| Environment | CPU | Memory | Disk |
|-------------|-----|--------|------|
| Test/Dev | 2 cores | 16 GB | 100 GB |
| Production | 8 cores | 64 GB | 100 GB |
Numbers are per Kubernetes worker node.
MySQL
| Environment | CPU | Memory | Disk |
|-------------|-----|--------|------|
| Test/Dev | 2 cores | 16 GB | 100 GB |
| Production | 8 cores | 64 GB | 500 GB |
Numbers are per MySQL node.
Weave only
Kubernetes
| Environment | CPU | Memory | Disk |
|-------------|-----|--------|------|
| Test/Dev | 4 cores | 32 GB | 100 GB |
| Production | 12 cores | 96 GB | 100 GB |
Numbers are per Kubernetes worker node.
MySQL
| Environment | CPU | Memory | Disk |
|-------------|-----|--------|------|
| Test/Dev | 2 cores | 16 GB | 100 GB |
| Production | 8 cores | 64 GB | 500 GB |
Numbers are per MySQL node.
Models and Weave
Kubernetes
| Environment | CPU | Memory | Disk |
|-------------|-----|--------|------|
| Test/Dev | 4 cores | 32 GB | 100 GB |
| Production | 16 cores | 128 GB | 100 GB |
Numbers are per Kubernetes worker node.
MySQL
| Environment | CPU | Memory | Disk |
|-------------|-----|--------|------|
| Test/Dev | 2 cores | 16 GB | 100 GB |
| Production | 8 cores | 64 GB | 500 GB |
Numbers are per MySQL node.
Cloud provider instance recommendations
Services
| Cloud | Kubernetes | MySQL | Object Storage |
|-------|------------|-------|----------------|
| AWS | EKS | RDS Aurora | S3 |
| GCP | GKE | Google Cloud SQL for MySQL | Google Cloud Storage (GCS) |
| Azure | AKS | Azure Database for MySQL | Azure Blob Storage |
Machine types
These recommendations apply to each node of a self-managed deployment of W&B in cloud infrastructure.
AWS
| Environment | K8s (Models only) | K8s (Weave only) | K8s (Models&Weave) | MySQL |
|-------------|-------------------|------------------|--------------------|-------|
| Test/Dev | r6i.large | r6i.xlarge | r6i.xlarge | db.r6g.large |
| Production | r6i.2xlarge | r6i.4xlarge | r6i.4xlarge | db.r6g.2xlarge |
GCP
| Environment | K8s (Models only) | K8s (Weave only) | K8s (Models&Weave) | MySQL |
|-------------|-------------------|------------------|--------------------|-------|
| Test/Dev | n2-highmem-2 | n2-highmem-4 | n2-highmem-4 | db-n1-highmem-2 |
| Production | n2-highmem-8 | n2-highmem-16 | n2-highmem-16 | db-n1-highmem-8 |
Azure
| Environment | K8s (Models only) | K8s (Weave only) | K8s (Models&Weave) | MySQL |
|-------------|-------------------|------------------|--------------------|-------|
| Test/Dev | Standard_E2_v5 | Standard_E4_v5 | Standard_E4_v5 | MO_Standard_E2ds_v4 |
| Production | Standard_E8_v5 | Standard_E16_v5 | Standard_E16_v5 | MO_Standard_E8ds_v4 |
5.1.3.2 - Run W&B Server on Kubernetes
Deploy W&B Platform with Kubernetes Operator
W&B Kubernetes Operator
Use the W&B Kubernetes Operator to simplify deploying, administering, troubleshooting, and scaling your W&B Server deployments on Kubernetes. You can think of the operator as a smart assistant for your W&B instance.
The W&B Server architecture and design continuously evolve to expand AI developer tooling capabilities and to provide appropriate primitives for high performance, better scalability, and easier administration. That evolution applies to the compute services, the relevant storage, and the connectivity between them. To help facilitate continuous updates and improvements across deployment types, W&B uses a Kubernetes operator.
W&B uses the operator to deploy and manage Dedicated cloud instances on AWS, GCP and Azure public clouds.
For more information about Kubernetes operators, see Operator pattern in the Kubernetes documentation.
Reasons for the architecture shift
Historically, the W&B application was deployed as a single deployment and pod within a Kubernetes cluster, or as a single Docker container. W&B has recommended, and continues to recommend, externalizing the database and object store. Externalizing the database and object store decouples the application’s state.
As the application grew, the need to evolve from a monolithic container to a distributed system (microservices) became apparent. This change facilitates backend logic handling and seamlessly introduces built-in Kubernetes infrastructure capabilities. A distributed system also supports deploying new services essential for additional features that W&B relies on.
Before 2024, any Kubernetes-related change required manually updating the terraform-kubernetes-wandb Terraform module: ensuring compatibility across cloud providers, configuring the necessary Terraform variables, and executing a Terraform apply for each backend or Kubernetes-level change.
This process was not scalable since W&B Support had to assist each customer with upgrading their Terraform module.
The solution was to implement an operator that connects to a central deploy.wandb.ai server to request the latest specification changes for a given release channel and apply them. Updates are received as long as the license is valid. Helm is used as both the deployment mechanism for the W&B operator and the means for the operator to handle all configuration templating of the W&B Kubernetes stack, Helm-ception.
How it works
You can install the operator with helm or from the source. See charts/operator for detailed instructions.
The installation process creates a deployment called controller-manager and uses a custom resource definition named weightsandbiases.apps.wandb.com (shortName: wandb) that takes a single spec and applies it to the cluster.
The controller-manager installs charts/operator-wandb based on the spec of the custom resource, release channel, and a user defined config. The configuration specification hierarchy enables maximum configuration flexibility at the user end and enables W&B to release new images, configurations, features, and Helm updates automatically.
Configuration specifications follow a hierarchical model where higher-level specifications override lower-level ones. Here’s how it works:
Release Channel Values: This base level configuration sets default values and configurations based on the release channel set by W&B for the deployment.
User Input Values: Users can override the default settings provided by the Release Channel Spec through the System Console.
Custom Resource Values: The highest level of specification, which comes from the user. Any values specified here override both the User Input and Release Channel specifications. For a detailed description of the configuration options, see Configuration Reference.
This hierarchical model ensures that configurations are flexible and customizable to meet varying needs while maintaining a manageable and systematic approach to upgrades and changes.
Requirements to use the W&B Kubernetes Operator
Satisfy the following requirements to deploy W&B with the W&B Kubernetes operator:
This section describes different ways to deploy the W&B Kubernetes operator.
The W&B Operator is the default and recommended installation method for W&B Server
Choose one of the following:
If you have provisioned all required external services and want to deploy W&B onto Kubernetes with Helm CLI, continue here.
If you prefer managing infrastructure and the W&B Server with Terraform, continue here.
If you want to utilize the W&B Cloud Terraform Modules, continue here.
Deploy W&B with Helm CLI
W&B provides a Helm Chart to deploy the W&B Kubernetes operator to a Kubernetes cluster. This approach lets you deploy W&B Server with the Helm CLI or a continuous delivery tool like ArgoCD. Make sure that the above-mentioned requirements are in place.
Follow these steps to install the W&B Kubernetes Operator with the Helm CLI:
Add the W&B Helm repository. The W&B Helm chart is available in the W&B Helm repository. Add the repo with the following commands:
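For example, a minimal sketch (charts.wandb.ai matches the chart endpoint listed in the reference architecture; the operator chart name, release name, and wandb-cr namespace are assumptions you can adjust):
# Add the W&B chart repository and refresh the local index
helm repo add wandb https://charts.wandb.ai
helm repo update
# Install the operator (controller manager); the release name and namespace are assumptions
helm upgrade --install operator wandb/operator -n wandb-cr --create-namespace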
Configure the W&B operator custom resource to trigger the W&B Server installation. Create an operator.yaml file to customize the W&B Operator deployment, specifying your custom configuration. See Configuration Reference for details.
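A minimal sketch of such a file is shown below. The kind and group come from the weightsandbiases.apps.wandb.com CRD described above; the apps.wandb.com/v1 API version, the spec.values nesting, and all placeholder values are assumptions to adapt to your chart version and environment:
cat > operator.yaml <<'EOF'
apiVersion: apps.wandb.com/v1        # assumption: v1 of the apps.wandb.com group behind the CRD
kind: WeightsAndBiases
metadata:
  name: wandb
  namespace: default
spec:
  values:                            # assumption: Helm values are nested under spec.values
    global:
      host: https://wandb.company-name.com   # assumption: key name for the instance FQDN
      license: eyJhbGnUzaHgyQjQy...          # example license, replace with your own
      mysql:
        host: 10.218.0.2
        port: 3306
        database: wandb_local
        user: wandb
        password: example-password
    ingress:
      class: ""
EOF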
Once you have the specification YAML created and filled with your values, run the following command. The operator applies the configuration and installs the W&B Server application based on it.
kubectl apply -f operator.yaml
Wait until the deployment completes. This takes a few minutes.
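To watch progress, you can list the custom resource (using the wandb short name from the CRD) and the pods it creates; the default namespace below is an assumption:
kubectl get wandb -n default
kubectl get pods -n default --watch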
To verify the installation using the web UI, create the first admin user account, then follow the verification steps outlined in Verify the installation.
Deploy W&B with Helm Terraform Module
This method allows for customized deployments tailored to specific requirements, leveraging Terraform’s infrastructure-as-code approach for consistency and repeatability. The official W&B Helm-based Terraform Module is located here.
The following code can be used as a starting point and includes all necessary configuration options for a production grade deployment.
Note that the configuration options are the same as described in Configuration Reference, but that the syntax has to follow the HashiCorp Configuration Language (HCL). The Terraform module creates the W&B custom resource definition (CRD).
To see how W&B uses the Helm Terraform module to deploy Dedicated cloud installations for customers, follow these links:
W&B provides a set of Terraform Modules for AWS, GCP and Azure. Those modules deploy entire infrastructures including Kubernetes clusters, load balancers, MySQL databases and so on as well as the W&B Server application. The W&B Kubernetes Operator is already pre-baked with those official W&B cloud-specific Terraform Modules with the following versions:
This integration ensures that W&B Kubernetes Operator is ready to use for your instance with minimal setup, providing a streamlined path to deploying and managing W&B Server in your cloud environment.
For a detailed description of how to use these modules, refer to the self-managed installations section in the docs.
Verify the installation
To verify the installation, W&B recommends using the W&B CLI. The verify command executes several tests that verify all components and configurations.
This step assumes that the first admin user account is created with the browser.
Follow these steps to verify the installation:
Install the W&B CLI:
pip install wandb
Log in to W&B:
wandb login --host=https://YOUR_DNS_DOMAIN
For example:
wandb login --host=https://wandb.company-name.com
Verify the installation:
wandb verify
A successful installation and fully working W&B deployment shows the following output:
Default host selected: https://wandb.company-name.com
Find detailed logs for this test at: /var/folders/pn/b3g3gnc11_sbsykqkm3tx5rh0000gp/T/tmpdtdjbxua/wandb
Checking if logged in...................................................✅
Checking signed URL upload..............................................✅
Checking ability to send large payloads through proxy...................✅
Checking requests to base url...........................................✅
Checking requests made over signed URLs.................................✅
Checking CORs configuration of the bucket...............................✅
Checking wandb package version is up to date............................✅
Checking logged metrics, saving and downloading a file..................✅
Checking artifact save and download workflows...........................✅
Access the W&B Management Console
The W&B Kubernetes operator comes with a management console. It is located at ${HOST_URI}/console, for example https://wandb.company-name.com/console.
There are two ways to log in to the management console:
Log in to the W&B application. Open the W&B application in the browser at ${HOST_URI}/, for example https://wandb.company-name.com/, and log in.
Access the console. Click on the icon in the top right corner and then click System console. Only users with admin privileges can see the System console entry.
W&B recommends you access the console using the following steps only if Option 1 does not work.
Open console application in browser. Open the above described URL, which redirects you to the login screen:
Retrieve the password from the Kubernetes secret that the installation generates:
kubectl get secret wandb-password -o jsonpath='{.data.password}' | base64 -d
Copy the password.
Login to the console. Paste the copied password, then click Login.
Update the W&B Kubernetes operator
This section describes how to update the W&B Kubernetes operator.
Updating the W&B Kubernetes operator does not update the W&B server application.
If you use a Helm chart that does not use the W&B Kubernetes operator, see the instructions here before you follow the instructions below to update the W&B operator.
Copy and paste the code snippets below into your terminal.
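A minimal sketch of the update, assuming the operator was installed as the operator release from the wandb Helm repository (adjust the release name and namespace to match your installation):
helm repo update
helm upgrade operator wandb/operator -n wandb-cr --reuse-values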
You no longer need to update the W&B Server application yourself if you use the W&B Kubernetes operator.
The operator automatically updates your W&B Server application when a new version of the W&B software is released.
Migrate self-managed instances to W&B Operator
The following sections describe how to migrate from self-managing your own W&B Server installation to using the W&B Operator to do this for you. The migration process depends on how you installed W&B Server:
The W&B Operator is the default and recommended installation method for W&B Server. Reach out to Customer Support or your W&B team if you have any questions.
If you used the official W&B Cloud Terraform Modules, navigate to the appropriate documentation and follow the steps there:
Configure the new helm chart and trigger W&B application deployment. Apply the new configuration.
kubectl apply -f operator.yaml
The deployment takes a few minutes to complete.
Verify the installation. Make sure that everything works by following the steps in Verify the installation.
Remove the old installation. Uninstall the old Helm chart or delete the resources that were created with manifests.
Migrate to Operator-based Terraform Helm chart
Follow these steps to migrate to the Operator-based Helm chart:
Prepare Terraform config. Replace the Terraform code from the old deployment in your Terraform config with the one that is described here. Set the same variables as before. Do not change .tfvars file if you have one.
Execute Terraform run. Execute terraform init, plan and apply
Verify the installation. Make sure that everything works by following the steps in Verify the installation.
Remove the old installation. Uninstall the old Helm chart or delete the resources that were created with manifests.
Configuration Reference for W&B Server
This section describes the configuration options for the W&B Server application. The application receives its configuration as a custom resource definition named WeightsAndBiases. Some configuration options are exposed with the configuration below; others need to be set as environment variables.
The documentation has two lists of environment variables: basic and advanced. Only use environment variables if the configuration options that you need are not exposed by the Helm chart.
The W&B Server application configuration file for a production deployment requires the following contents. This YAML file defines the desired state of your W&B deployment, including the version, environment variables, external resources like databases, and other necessary settings.
The bucket connection string variable has this form:
s3://$ACCESS_KEY:$SECRET_KEY@$HOST/$BUCKET_NAME
You can optionally tell W&B to connect only over TLS if you configure a trusted SSL certificate for your object store. To do so, add the tls query parameter to the URL:
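For example, a sketch that assumes the parameter is named tls and takes the value true:
s3://$ACCESS_KEY:$SECRET_KEY@$HOST/$BUCKET_NAME?tls=true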
This works only for a trusted SSL certificate. W&B does not support self-signed certificates.
MySQL
global:
  mysql:
    # Example values, replace with your own
    database: wandb_local
    host: 10.218.0.2
    name: wandb_local
    password: 8wtX6cJH...ZcUarK4zZGjpV
    port: 3306
    user: wandb
License
global:
  # Example license, replace with your own
  license: eyJhbGnUzaHgyQjQy...VFnPS_KETXg1hi
Ingress
To identify the ingress class, see this FAQ entry.
Without TLS
global:
# IMPORTANT: Ingress is on the same level in the YAML as 'global' (not a child)
ingress:
  class: ""
With TLS
global:
# IMPORTANT: Ingress is on the same level in the YAML as 'global' (not a child)
ingress:
  class: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  tls:
    - secretName: wandb-ingress-tls
      hosts:
        - <HOST_URI>
If you use Nginx, you might have to add the following annotation:
LDAP
Without TLS
global:
  ldap:
    enabled: true
    # LDAP server address including "ldap://" or "ldaps://"
    host:
    # LDAP search base to use for finding users
    baseDN:
    # LDAP user to bind with (if not using anonymous bind)
    bindDN:
    # Secret name and key with LDAP password to bind with (if not using anonymous bind)
    bindPW:
    # LDAP attribute for email and group ID attribute names as comma separated string values.
    attributes:
    # LDAP group allow list
    groupAllowList:
    # Enable LDAP TLS
    tls: false
With TLS
The LDAP TLS cert configuration requires a config map pre-created with the certificate content.
To create the config map you can use the following command:
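For example, the following sketch creates a config map whose name and key match the tlsCert reference in the YAML below; the certificate path is a placeholder:
kubectl create configmap ldap-tls-cert --from-file=certificate.crt=./ca.crt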
And use the config map in the YAML like the example below
global:
  ldap:
    enabled: true
    # LDAP server address including "ldap://" or "ldaps://"
    host:
    # LDAP search base to use for finding users
    baseDN:
    # LDAP user to bind with (if not using anonymous bind)
    bindDN:
    # Secret name and key with LDAP password to bind with (if not using anonymous bind)
    bindPW:
    # LDAP attribute for email and group ID attribute names as comma separated string values.
    attributes:
    # LDAP group allow list
    groupAllowList:
    # Enable LDAP TLS
    tls: true
    # ConfigMap name and key with CA certificate for LDAP server
    tlsCert:
      configMap:
        name: "ldap-tls-cert"
        key: "certificate.crt"
Configuration Reference for W&B Operator
This section describes configuration options for the W&B Kubernetes operator (wandb-controller-manager). The operator receives its configuration in the form of a YAML file.
By default, the W&B Kubernetes operator does not need a configuration file. Create a configuration file if required; for example, you might need one to specify custom certificate authorities or to deploy in an air-gapped environment.
A custom certificate authority (customCACerts) is a list and can take many certificates. Those certificate authorities only apply to the W&B Kubernetes operator (wandb-controller-manager).
You can get the ingress class installed in your cluster by running
kubectl get ingressclass
5.1.3.2.1 - Kubernetes operator for air-gapped instances
Deploy W&B Platform with Kubernetes Operator (Airgapped)
Introduction
This guide provides step-by-step instructions to deploy the W&B Platform in air-gapped customer-managed environments.
Use an internal repository or registry to host the Helm charts and container images. Run all commands in a shell console with proper access to the Kubernetes cluster.
You could utilize similar commands in any continuous delivery tooling that you use to deploy Kubernetes applications.
Step 1: Prerequisites
Before starting, make sure your environment meets the following requirements:
Kubernetes version >= 1.28
Helm version >= 3
Access to an internal container registry with the required W&B images
Access to an internal Helm repository for W&B Helm charts
Step 2: Prepare internal container registry
Before proceeding with the deployment, you must ensure that the following container images are available in your internal container registry.
These images are critical for the successful deployment of W&B components.
The operator chart is used to deploy the W&B Operator (the controller manager), while the platform chart is used to deploy the W&B Platform using the values configured in the custom resource definition (CRD).
Step 4: Set up Helm repository
Now, configure the Helm repository to pull the W&B Helm charts from your internal repository. Run the following commands to add and update the Helm repository:
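For example, a sketch in which charts.example.internal is a placeholder for your internal Helm repository and the repository name wandb is an assumption:
helm repo add wandb https://charts.example.internal
helm repo update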
The W&B Kubernetes operator, also known as the controller manager, is responsible for managing the W&B platform components. To install it in an air-gapped environment, you must configure it to use your internal container registry.
To do so, you must override the default image settings to use your internal container registry and set the key airgapped: true to indicate the expected deployment type. Update the values.yaml file as shown below:
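A minimal sketch of the override, assuming the operator chart exposes conventional image.repository and image.tag keys (check your chart version); registry.example.internal is a placeholder for your registry, and the install command mirrors the non-airgapped flow:
cat > values.yaml <<'EOF'
image:
  repository: registry.example.internal/wandb/controller   # assumption: conventional Helm image keys
  tag: <CONTROLLER_IMAGE_TAG>                               # placeholder
airgapped: true
EOF
helm upgrade --install operator wandb/operator -n wandb-cr --create-namespace -f values.yaml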
After installing the W&B Kubernetes operator, you must configure the Custom Resource Definitions (CRDs) to point to your internal Helm repository and container registry.
This configuration ensures that the Kubernetes operator uses your internal registry and repository when it deploys the required components of the W&B platform.
To deploy the W&B platform, the Kubernetes Operator uses the operator-wandb chart from your internal repository and the values from your CRD to configure the Helm chart.
Finally, after setting up the Kubernetes operator and the CRD, deploy the W&B platform using the following command:
kubectl apply -f wandb.yaml
FAQ
Refer to the below frequently asked questions (FAQs) and troubleshooting tips during the deployment process:
There is another ingress class. Can that class be used?
Yes, you can configure your ingress class by modifying the ingress settings in values.yaml.
The certificate bundle has more than one certificate. Would that work?
You must split the certificates into multiple entries in the customCACerts section of values.yaml.
Is it possible to prevent the Kubernetes operator from applying unattended updates?
You can turn off auto-updates from the W&B console. Reach out to your W&B team with any questions about supported versions. Also note that W&B supports platform versions released in the last 6 months. W&B recommends performing periodic upgrades.
Does the deployment work if the environment has no connection to public repositories?
As long as you have enabled the airgapped: true configuration, the Kubernetes operator does not attempt to reach public repositories. The Kubernetes operator attempts to use your internal resources.
5.1.3.3 - Install on public cloud
5.1.3.3.1 - Deploy W&B Platform on AWS
Hosting W&B Server on AWS.
W&B recommends fully managed deployment options such as W&B Multi-tenant Cloud or W&B Dedicated Cloud deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required.
Before you start, W&B recommends that you choose one of the remote backends available for Terraform to store the State File.
The State File is the necessary resource to roll out upgrades or make changes in your deployment without recreating all components.
The Terraform Module deploys the following mandatory components:
Load Balancer
AWS Identity & Access Management (IAM)
AWS Key Management System (KMS)
Amazon Aurora MySQL
Amazon VPC
Amazon S3
Amazon Route53
Amazon Certificate Manager (ACM)
Amazon Elastic Load Balancing (ALB)
Amazon Secrets Manager
Other deployment options can also include the following optional components:
Elastic Cache for Redis
SQS
Pre-requisite permissions
The account that runs Terraform needs to be able to create all components described in the Introduction, and it needs permission to create IAM Policies and IAM Roles and to assign roles to resources.
General steps
The steps on this topic are common for any deployment option covered by this documentation.
Create the terraform.tfvars file and define its variables before you deploy; the namespace variable is a string that prefixes all resources created by Terraform.
The combination of subdomain and domain forms the FQDN at which W&B is served. In the example below, the W&B FQDN is wandb-aws.wandb.ml, and zone_id is the DNS zone where the FQDN record is created.
Both allowed_inbound_cidr and allowed_inbound_ipv6_cidr also require values; they are mandatory inputs to the module. The example below permits access from any source to the W&B installation.
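A minimal terraform.tfvars along these lines (all values are illustrative; 0.0.0.0/0 and ::/0 open the instance to any source, as noted above):
cat > terraform.tfvars <<'EOF'
namespace                 = "wandb"
domain_name               = "wandb.ml"
subdomain                 = "wandb-aws"
zone_id                   = "xxxxxxxxxxxxxxxx"
license                   = "xxxxxxxxxxxxxxxxx"
allowed_inbound_cidr      = ["0.0.0.0/0"]
allowed_inbound_ipv6_cidr = ["::/0"]
EOF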
Create the file versions.tf
This file will contain the Terraform and Terraform provider versions required to deploy W&B in AWS
provider "aws"{ region ="eu-central-1" default_tags { tags ={ GithubRepo ="terraform-aws-wandb" GithubOrg ="wandb" Enviroment ="Example" Example ="PublicDnsExternal"}}}
Optionally, but highly recommended, add the remote backend configuration mentioned at the beginning of this documentation.
Create the file variables.tf
For every option configured in terraform.tfvars, Terraform requires a corresponding variable declaration.
variable "namespace" {
type = string
description = "Name prefix used for resources"
}
variable "domain_name" {
type = string
description = "Domain name used to access instance."
}
variable "subdomain" {
type = string
default = null
description = "Subdomain for accessing the Weights & Biases UI."
}
variable "license" {
type = string
}
variable "zone_id" {
type = string
description = "Domain for creating the Weights & Biases subdomain on."
}
variable "allowed_inbound_cidr" {
description = "CIDRs allowed to access wandb-server."
nullable = false
type = list(string)
}
variable "allowed_inbound_ipv6_cidr" {
description = "CIDRs allowed to access wandb-server."
nullable = false
type = list(string)
}
Recommended deployment option
This is the most straightforward deployment option configuration; it creates all mandatory components and installs the latest version of W&B in the Kubernetes cluster.
Create the main.tf
In the same directory where you created the files in the General Steps, create a file main.tf with the following content:
module "wandb_infra" {
source = "wandb/wandb/aws"
version = "~>2.0"
namespace = var.namespace
domain_name = var.domain_name
subdomain = var.subdomain
zone_id = var.zone_id
allowed_inbound_cidr = var.allowed_inbound_cidr
allowed_inbound_ipv6_cidr = var.allowed_inbound_ipv6_cidr
public_access = true
external_dns = true
kubernetes_public_access = true
kubernetes_public_access_cidrs = ["0.0.0.0/0"]
}
data "aws_eks_cluster" "app_cluster" {
name = module.wandb_infra.cluster_id
}
data "aws_eks_cluster_auth" "app_cluster" {
name = module.wandb_infra.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.app_cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.app_cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.app_cluster.token
}
module "wandb_app" {
source = "wandb/wandb/kubernetes"
version = "~>1.0"
license = var.license
host = module.wandb_infra.url
bucket = "s3://${module.wandb_infra.bucket_name}"
bucket_aws_region = module.wandb_infra.bucket_region
bucket_queue = "internal://"
database_connection_string = "mysql://${module.wandb_infra.database_connection_string}"
# TF attempts to deploy while the work group is
# still spinning up if you do not wait
depends_on = [module.wandb_infra]
}
output "bucket_name" {
value = module.wandb_infra.bucket_name
}
output "url" {
value = module.wandb_infra.url
}
Another deployment option uses Redis to cache the SQL queries and speed up the application response when loading the metrics for the experiments.
You need to add the option create_elasticache_subnet = true to the same main.tf file described in the Recommended deployment section to enable the cache.
Deployment option 3 consists of enabling an external message broker. This is optional because W&B ships with an embedded broker. This option does not provide a performance improvement.
The AWS resource that provides the message broker is SQS. To enable it, add the option use_internal_queue = false to the same main.tf described in the Recommended deployment section.
You can combine all three deployment options by adding all configurations to the same file.
The Terraform Module provides several options that can be combined along with the standard options and the minimal configuration found in Deployment - Recommended
Manual configuration
To use an Amazon S3 bucket as a file storage backend for W&B, create a bucket, along with an SQS queue configured to receive object creation notifications from that bucket. Your instance needs permission to read from this queue.
Create an S3 Bucket and Bucket Notifications
Follow the procedure below to create an Amazon S3 bucket and enable bucket notifications.
Navigate to Amazon S3 in the AWS Console.
Select Create bucket.
Within the Advanced settings, select Add notification within the Events section.
Configure all object creation events to be sent to the SQS Queue you configured earlier.
Enable CORS access. Your CORS configuration should look like the following:
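The exact policy is not reproduced here. As a sketch that mirrors the GCS CORS example later in this document (GET and PUT from your W&B host), you could apply something like the following with the AWS CLI; the origin and bucket name are placeholders:
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://YOUR-WANDB-SERVER-HOST"],
      "AllowedMethods": ["GET", "PUT"],
      "AllowedHeaders": ["*"]
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket <BUCKET_NAME> --cors-configuration file://cors.json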
Follow the procedure below to create an SQS Queue:
Navigate to Amazon SQS in the AWS Console.
Select Create queue.
From the Details section, select a Standard queue type.
Within the Access policy section, add permission to the following principals:
SendMessage
ReceiveMessage
ChangeMessageVisibility
DeleteMessage
GetQueueUrl
Optionally add an advanced access policy in the Access Policy section. For example, the policy for accessing Amazon SQS with a statement is as follows:
The node where W&B server is running must be configured to permit access to Amazon S3 and Amazon SQS. Depending on the type of server deployment you have opted for, you may need to add the following policy statements to your node role:
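As a sketch only (not the exact policy from the W&B documentation), the node role needs roughly S3 read/write on the bucket plus the SQS actions listed above; the S3 action list and all ARNs are assumptions to adapt:
cat > wandb-node-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket", "s3:GetBucketCORS"],
      "Resource": ["arn:aws:s3:::<BUCKET_NAME>", "arn:aws:s3:::<BUCKET_NAME>/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage",
        "sqs:ChangeMessageVisibility",
        "sqs:DeleteMessage",
        "sqs:GetQueueUrl"
      ],
      "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT_ID>:<QUEUE_NAME>"
    }
  ]
}
EOF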
Navigate to the W&B settings page at http(s)://YOUR-W&B-SERVER-HOST/system-admin.
Enable the Use an external file storage backend option.
Provide information about your Amazon S3 bucket, region, and Amazon SQS queue in the following format:
File Storage Bucket: s3://<bucket-name>
File Storage Region (AWS only): <region>
Notification Subscription: sqs://<queue-name>
Select Update settings to apply the new settings.
Upgrade your W&B version
Follow the steps outlined here to update W&B:
Add wandb_version to your configuration in your wandb_app module. Provide the version of W&B you want to upgrade to. For example, the following line specifies W&B version 0.48.1:
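wandb_version = "0.48.1"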
Alternatively, you can add wandb_version to terraform.tfvars, create a variable with the same name, and use var.wandb_version instead of the literal value.
This section details the steps required to upgrade from pre-operator to post-operator environments using the terraform-aws-wandb module.
The transition to a Kubernetes operator pattern is necessary for the W&B architecture. See this section for a detailed explanation of the architecture shift.
Before and after architecture
Previously, the W&B architecture used:
module"wandb_infra" {
source ="wandb/wandb/aws" version ="1.16.10" ...
}
to control the infrastructure, and this module to deploy the W&B Server:
module"wandb_app" {
source ="wandb/wandb/kubernetes" version ="1.12.0"}
Post-transition, the architecture uses:
module"wandb_infra" {
source ="wandb/wandb/aws" version ="4.7.2" ...
}
to manage both the installation of infrastructure and the W&B Server to the Kubernetes cluster, thus eliminating the need for the module "wandb_app" in post-operator.tf.
This architectural shift enables additional features (like OpenTelemetry, Prometheus, HPAs, Kafka, and image updates) without requiring manual Terraform operations by SRE/Infrastructure teams.
To start with a base installation of the W&B Pre-Operator, ensure that post-operator.tf has a .disabled file extension and that pre-operator.tf is active (that is, it does not have a .disabled extension). Those files can be found here.
Prerequisites
Before initiating the migration process, ensure the following prerequisites are met:
Egress: The deployment can’t be airgapped. It needs access to deploy.wandb.ai to get the latest spec for the Release Channel.
AWS Credentials: Proper AWS credentials configured to interact with your AWS resources.
Terraform Installed: The latest version of Terraform should be installed on your system.
Route53 Hosted Zone: An existing Route53 hosted zone corresponding to the domain under which the application will be served.
Pre-Operator Terraform Files: Ensure pre-operator.tf and associated variable files like pre-operator.tfvars are correctly set up.
Pre-Operator set up
Execute the following Terraform commands to initialize and apply the configuration for the Pre-Operator setup:
The pre-operator.tf configuration calls two modules:
module"wandb_infra" {
source ="wandb/wandb/aws" version ="1.16.10" ...
}
This module spins up the infrastructure.
module"wandb_app" {
source ="wandb/wandb/kubernetes" version ="1.12.0"}
This module deploys the application.
Post-Operator Setup
Make sure that pre-operator.tf has a .disabled extension, and post-operator.tf is active.
The post-operator.tfvars includes additional variables:
...
# wandb_version = "0.51.2" is now managed via the Release Channel or set in the User Spec.

# Required Operator Variables for Upgrade:
size                 = "small"
enable_dummy_dns     = true
enable_operator_alb  = true
custom_domain_filter = "sandbox-aws.wandb.ml"
Run the following commands to initialize and apply the Post-Operator configuration:
module"wandb_infra" {
source ="wandb/wandb/aws" version ="4.7.2" ...
}
Changes in the post-operator configuration:
Update Required Providers: Change required_providers.aws.version from 3.6 to 4.0 for provider compatibility.
DNS and Load Balancer Configuration: Integrate enable_dummy_dns and enable_operator_alb to manage DNS records and AWS Load Balancer setup through an Ingress.
License and Size Configuration: Transfer the license and size parameters directly to the wandb_infra module to match new operational requirements.
Custom Domain Handling: If necessary, use custom_domain_filter to troubleshoot DNS issues by checking the External DNS pod logs within the kube-system namespace.
Helm Provider Configuration: Enable and configure the Helm provider to manage Kubernetes resources effectively:
This comprehensive setup ensures a smooth transition from the Pre-Operator to the Post-Operator configuration, leveraging new efficiencies and capabilities enabled by the operator model.
5.1.3.3.2 - Deploy W&B Platform on GCP
Hosting W&B Server on GCP.
W&B recommends fully managed deployment options such as W&B Multi-tenant Cloud or W&B Dedicated Cloud deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required.
If you have decided to self-manage W&B Server, W&B recommends using the W&B Server GCP Terraform Module to deploy the platform on GCP.
The module documentation is extensive and contains all available options that can be used.
Before you start, W&B recommends that you choose one of the remote backends available for Terraform to store the State File.
The State File is the necessary resource to roll out upgrades or make changes in your deployment without recreating all components.
The Terraform Module will deploy the following mandatory components:
VPC
Cloud SQL for MySQL
Cloud Storage Bucket
Google Kubernetes Engine
KMS Crypto Key
Load Balancer
Other deployment options can also include the following optional components:
Memory store for Redis
Pub/Sub messages system
Pre-requisite permissions
The account that runs Terraform needs the roles/owner role in the GCP project that you use.
General steps
The steps on this topic are common for any deployment option covered by this documentation.
Decide on the variables defined here before the deployment. The namespace variable is a string that prefixes all resources created by Terraform.
The combination of subdomain and domain forms the FQDN at which W&B is served. In the example below, the W&B FQDN is wandb-gcp.wandb.ml.
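A minimal terraform.tfvars along these lines (all values are illustrative):
cat > terraform.tfvars <<'EOF'
project_id  = "wandb-project"
region      = "europe-west2"
zone        = "europe-west2-a"
namespace   = "wandb"
license     = "xxxxxxxxxxxxxxxxx"
subdomain   = "wandb-gcp"
domain_name = "wandb.ml"
EOF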
Create the file variables.tf
For every option configured in terraform.tfvars, Terraform requires a corresponding variable declaration.
variable "project_id" {
type = string
description = "Project ID"
}
variable "region" {
type = string
description = "Google region"
}
variable "zone" {
type = string
description = "Google zone"
}
variable "namespace" {
type = string
description = "Namespace prefix used for resources"
}
variable "domain_name" {
type = string
description = "Domain name for accessing the Weights & Biases UI."
}
variable "subdomain" {
type = string
description = "Subdomain for access the Weights & Biases UI."
}
variable "license" {
type = string
description = "W&B License"
}
Deployment - Recommended (~20 mins)
This is the most straightforward deployment option configuration; it creates all mandatory components and installs the latest version of W&B in the Kubernetes cluster.
Create the main.tf
In the same directory where you created the files in the General Steps, create a file main.tf with the following content:
provider "google" {
project = var.project_id
region = var.region
zone = var.zone
}
provider "google-beta" {
project = var.project_id
region = var.region
zone = var.zone
}
data "google_client_config" "current" {}
provider "kubernetes" {
host = "https://${module.wandb.cluster_endpoint}"
cluster_ca_certificate = base64decode(module.wandb.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
# Spin up all required services
module "wandb" {
source = "wandb/wandb/google"
version = "~> 5.0"
namespace = var.namespace
license = var.license
domain_name = var.domain_name
subdomain = var.subdomain
}
# You'll want to update your DNS with the provisioned IP address
output "url" {
value = module.wandb.url
}
output "address" {
value = module.wandb.address
}
output "bucket_name" {
value = module.wandb.bucket_name
}
Deployment option 3 consists of enabling an external message broker. This is optional because W&B ships with an embedded broker. This option does not provide a performance improvement.
The GCP resource that provides the message broker is Pub/Sub. To enable it, add the option use_internal_queue = false to the same main.tf specified in the Recommended deployment option section.
You can combine all three deployment options by adding all configurations to the same file.
The Terraform Module provides several options that can be combined along with the standard options and the minimal configuration found in Deployment - Recommended
Manual configuration
To use a GCP Storage bucket as a file storage backend for W&B, you need to create a storage bucket, along with a Pub/Sub topic that receives object creation notifications from that bucket.
Your instance also needs the iam.serviceAccounts.signBlob permission in GCP to create signed file URLs. To grant this permission, add the Service Account Token Creator role to the service account or IAM member that your instance runs as.
Enable CORS access. This can only be done using the command line. First, create a JSON file with the following CORS configuration.
cors:
- maxAgeSeconds: 3600
method:
- GET
- PUT
origin:
- '<YOUR_W&B_SERVER_HOST>'
responseHeader:
- Content-Type
Note that the scheme, host, and port of the values for the origin must match exactly.
Make sure you have gcloud installed and that you are logged in to the correct GCP project.
Follow the procedure below on your command line to create a notification stream from the Storage Bucket to the Pub/Sub topic; you must use the CLI to create the notification stream.
Run the following in your terminal:
gcloud pubsub topics list    # list names of topics for reference
gcloud storage ls            # list names of buckets for reference

# create bucket notification
gcloud storage buckets notifications create gs://<BUCKET_NAME> --topic=<TOPIC_NAME>
Finally, navigate to the W&B System Connections page at http(s)://YOUR-W&B-SERVER-HOST/console/settings/system.
Select the provider Google Cloud Storage (gcs).
Provide the name of the GCS bucket.
Press Update settings to apply the new settings.
Upgrade W&B Server
Follow the steps outlined here to update W&B:
Add wandb_version to your configuration in your wandb_app module. Provide the version of W&B you want to upgrade to. For example, the following line specifies W&B version 0.48.1:
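wandb_version = "0.48.1"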
Alternatively, you can add wandb_version to terraform.tfvars, create a variable with the same name, and use var.wandb_version instead of the literal value.
W&B recommends fully managed deployment options such as W&B Multi-tenant Cloud or W&B Dedicated Cloud deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required.
If you have decided to self-manage W&B Server, W&B recommends using the W&B Server Azure Terraform Module to deploy the platform on Azure.
The module documentation is extensive and contains all the available options. We cover some deployment options in this document.
Before you start, we recommend you choose one of the remote backends available for Terraform to store the State File.
The State File is the necessary resource to roll out upgrades or make changes in your deployment without recreating all components.
The Terraform Module will deploy the following mandatory components:
Azure Resource Group
Azure Virtual Network (VPC)
Azure MySQL Flexible Server
Azure Storage Account & Blob Storage
Azure Kubernetes Service
Azure Application Gateway
Other deployment options can also include the following optional components:
Azure Cache for Redis
Azure Event Grid
Pre-requisite permissions
The simplest way to configure the AzureRM provider is via the Azure CLI, but in the case of automation, using an Azure Service Principal can also be useful.
Regardless of the authentication method used, the account that runs Terraform needs to be able to create all components described in the Introduction.
General steps
The steps on this topic are common for any deployment option covered by this documentation.
We recommend creating a Git repository with the code that will be used, but you can keep your files locally.
Create the terraform.tfvars file. The tfvars file content can be customized according to the installation type, but the recommended minimum looks like the example below.
Decide on the variables defined here before the deployment. The namespace variable is a string that prefixes all resources created by Terraform.
The combination of subdomain and domain forms the FQDN at which W&B is served. In the example below, the W&B FQDN is wandb-aws.wandb.ml.
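A minimal terraform.tfvars along these lines (all values are illustrative):
cat > terraform.tfvars <<'EOF'
namespace   = "wandb"
location    = "westeurope"
domain_name = "wandb.ml"
subdomain   = "wandb-aws"
license     = "xxxxxxxxxxxxxxxxx"
EOF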
Create the file versions.tf. This file contains the Terraform and Terraform provider versions required to deploy W&B in Azure.
Optionally, but highly recommended, add the remote backend configuration mentioned at the beginning of this documentation.
Create the file variables.tf. For every option configured in terraform.tfvars, Terraform requires a corresponding variable declaration.
variable "namespace"{ type = string
description ="String used for prefix resources."} variable "location"{ type = string
description ="Azure Resource Group location"} variable "domain_name"{ type = string
description ="Domain for accessing the Weights & Biases UI."} variable "subdomain"{ type = string
default = null
description ="Subdomain for accessing the Weights & Biases UI. Default creates record at Route53 Route."} variable "license"{ type = string
description ="Your wandb/local license"}
Recommended deployment
This is the most straightforward deployment option configuration; it creates all mandatory components and installs the latest version of W&B in the Kubernetes cluster.
Create the main.tf. In the same directory where you created the files in the General Steps, create a file main.tf with the following content:
Deployment option 3 consists of enabling an external message broker. This is optional because W&B ships with an embedded broker. This option does not provide a performance improvement.
The Azure resource that provides the message broker is Azure Event Grid. To enable it, add the option use_internal_queue = false to the same main.tf that you used in the recommended deployment.
You can combine all three deployment options by adding all configurations to the same file.
The Terraform Module provides several options that you can combine along with the standard options and the minimal configuration found in recommended deployment
5.1.3.4 - Deploy W&B Platform On-premises
Hosting W&B Server on on-premises infrastructure
W&B recommends fully managed deployment options such as W&B Multi-tenant Cloud or W&B Dedicated Cloud deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required.
Reach out to the W&B Sales Team for related questions: contact@wandb.com.
Infrastructure guidelines
Before you start deploying W&B, refer to the reference architecture, especially the infrastructure requirements.
MySQL database
W&B does not recommend using MySQL 5.7. If you are using MySQL 5.7, migrate to MySQL 8 for best compatibility with the latest versions of W&B Server. The W&B Server currently only supports MySQL 8 versions 8.0.28 and above.
There are a number of enterprise services that make operating a scalable MySQL database simpler. W&B recommends looking into one of the following solutions:
Due to some changes in the way that MySQL 8.0 handles sort_buffer_size, you might need to update the sort_buffer_size parameter from its default value of 262144. The recommendation is to set the value to 67108864 (64MiB) to ensure that MySQL works efficiently with W&B. MySQL supports this configuration starting with v8.0.28.
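For example, you could set and check the value on a running server with the mysql client (or set the equivalent parameter in my.cnf or your managed database's parameter group); this is a sketch, not the only way to apply it:
mysql -u root -p -e "SET GLOBAL sort_buffer_size = 67108864;"
mysql -u root -p -e "SHOW VARIABLES LIKE 'sort_buffer_size';"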
Database considerations
Create a database and a user with the following SQL query. Replace SOME_PASSWORD with a password of your choice:
CREATE USER 'wandb_local'@'%' IDENTIFIED BY 'SOME_PASSWORD';
CREATE DATABASE wandb_local CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
GRANT ALL ON wandb_local.* TO 'wandb_local'@'%' WITH GRANT OPTION;
This works only if the SSL certificate is trusted. W&B does not support self-signed certificates.
Parameter group configuration
Ensure that the following parameter groups are set to tune the database performance:
Object store
The object store can be externally hosted on a MinIO cluster, or on any Amazon S3-compatible object store that supports signed URLs. Run the following script to check whether your object store supports signed URLs.
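The W&B-provided check script is not reproduced here. As a rough stand-in, you can generate a presigned URL against your endpoint with the AWS CLI and confirm that a plain HTTP client can use it; the endpoint, bucket, and object key are placeholders:
# Generate a presigned GET URL against your S3-compatible endpoint
aws s3 presign s3://<BUCKET_NAME>/<OBJECT_KEY> --endpoint-url https://<OBJECT_STORE_HOST> --expires-in 600

# Fetch the object with the returned URL; a 200 response indicates signed URLs work
curl -fsS -o /dev/null -w '%{http_code}\n' "<PRESIGNED_URL>"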
Additionally, the following CORS policy needs to be applied to the object store.
You can specify your credentials in a connection string when you connect to an Amazon S3 compatible object store. For example, you can specify the following:
s3://$ACCESS_KEY:$SECRET_KEY@$HOST/$BUCKET_NAME
You can optionally tell W&B to connect only over TLS if you configure a trusted SSL certificate for your object store. To do so, add the tls query parameter to the URL, as the following Amazon S3 URI example demonstrates:
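A sketch, assuming the parameter takes the value true:
s3://$ACCESS_KEY:$SECRET_KEY@$HOST/$BUCKET_NAME?tls=true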
This works only if the SSL certificate is trusted. W&B does not support self-signed certificates.
Set BUCKET_QUEUE to internal:// if you use third-party object stores. This tells the W&B server to manage all object notifications internally instead of depending on an external SQS queue or equivalent.
The most important things to consider when running your own object store are:
Storage capacity and performance. It is fine to use magnetic disks, but you should monitor the capacity of these disks. Average W&B usage results in tens to hundreds of gigabytes of storage. Heavy usage could result in petabytes of storage consumption.
Fault tolerance. At a minimum, the physical disk storing the objects should be on a RAID array. If you use MinIO, consider running it in distributed mode.
Availability. Monitoring should be configured to ensure the storage is available.
There are many enterprise alternatives to running your own object storage service such as:
Ensure that all machines used to execute machine learning payloads, and the devices used to access the service through web browsers, can communicate with this endpoint.
SSL / TLS
W&B Server does not terminate SSL. If your security policies require SSL communication within your trusted networks, consider using a tool like Istio and sidecar containers. The load balancer itself should terminate SSL with a valid certificate. Using self-signed certificates is not supported and causes a number of challenges for users. If possible, using a service like Let’s Encrypt is a great way to provide trusted certificates to your load balancer. Services like Caddy and Cloudflare manage SSL for you.
Example nginx configuration
The following is an example configuration using nginx as a reverse proxy.
events {}
http {
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# Also, in the above case, force HTTPS
map $http_x_forwarded_proto $sts {
    default '';
    "https" "max-age=31536000; includeSubDomains";
}
# If we receive X-Forwarded-Host, pass it though; otherwise, pass along $http_host
map $http_x_forwarded_host $proxy_x_forwarded_host {
default $http_x_forwarded_host;
'' $http_host;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate www.example.com.crt;
    ssl_certificate_key www.example.com.key;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $proxy_connection;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
    proxy_set_header X-Forwarded-Host $proxy_x_forwarded_host;
    location / {
      proxy_pass http://$YOUR_UPSTREAM_SERVER_IP:8080/;
    }
    keepalive_timeout 10;
}
}
Verify your installation
Verify that your W&B Server is configured properly. Run the following commands in your terminal:
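For example, you can reuse the CLI checks described earlier in this document, replacing the host with your own FQDN:
pip install wandb
wandb login --host=https://YOUR_DNS_DOMAIN
wandb verify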
Check log files to view any errors the W&B Server hits at startup. Run the following commands:
docker logs wandb-local
kubectl get pods
kubectl logs wandb-XXXXX-XXXXX
Contact W&B Support if you encounter errors.
5.1.3.5 - Update W&B license and version
Guide for updating W&B (Weights & Biases) version and license across different installation methods.
Update your W&B Server version and license with the same method you used to install W&B Server. The following table lists how to update your license and version based on different deployment methods:
First, navigate to the W&B maintained Terraform module for your appropriate cloud provider. See the preceding table to find the appropriate Terraform module based on your cloud provider.
Within your Terraform configuration, update wandb_version and license in your Terraform wandb_app module configuration:
module"wandb_app" {
source ="wandb/wandb/<cloud-specific-module>" version ="new_version" license ="new_license_key" # Your new license key
wandb_version ="new_wandb_version" # Desired W&B version
...
}
Apply the Terraform configuration with terraform plan and terraform apply.
terraform init
terraform apply
(Optional) If you use a terraform.tfvars or other .tfvars file, update or create the file with the new W&B version and license key, then run the plan with the var file:
terraform plan -var-file="terraform.tfvars"
Apply the configuration. In your Terraform workspace directory execute:
terraform apply -var-file="terraform.tfvars"
Update with Helm
Update W&B with spec
Specify a new version by modifying the image.tag and/or license values in your Helm chart *.yaml configuration file:
For more details, see the upgrade guide in the public repository.
Update with admin UI
This method only works for updating licenses that are not set with an environment variable in the W&B server container, typically in self-hosted Docker installations.
Obtain a new license from the W&B Deployment Page, ensuring it matches the correct organization and deployment ID for the deployment you are looking to upgrade.
Access the W&B Admin UI at <host-url>/system-settings.
An Organization is the root scope in your W&B account or instance. All actions in your account or instance take place within the context of that root scope, including managing users, managing teams, managing projects within teams, tracking usage and more.
If you are using Multi-tenant Cloud, you may have more than one organization where each may correspond to a business unit, a personal user, a joint partnership with another business and more.
If you are using Dedicated Cloud or a Self-managed instance, it corresponds to one organization. Your company may have more than one Dedicated Cloud or Self-managed instance to map to different business units or departments, though that is strictly an optional way to manage AI practitioners across your businesses or departments.
A Team is a subscope within a organization, that may map to a business unit / function, department, or a project team in your company. You may have more than one team in your organization depending on your deployment type and pricing plan.
AI projects are organized within the context of a team. The access control within a team is governed by team admins, who may or may not be admins at the parent organization level.
A Project is a subscope within a team, that maps to an actual AI project with specific intended outcomes. You may have more than one project within a team. Each project has a visibility mode which determines who can access it.
Authenticate your credentials with the W&B Server LDAP server. The following guide explains how to configure the settings for W&B Server. It covers mandatory and optional configurations, as well as instructions for configuring the LDAP connection from the System Settings UI. It also provides information on the different inputs of the LDAP configuration, such as the address, base distinguished name, and attributes. You can specify these attributes from the W&B App UI or using environment variables. You can set up either an anonymous bind, or a bind with an administrator DN and password.
Only W&B Admin roles can enable and configure LDAP authentication.
Configure LDAP connection
Navigate to the W&B App.
Select your profile icon from the upper right. From the dropdown, select System Settings.
Toggle Configure LDAP Client.
Add the details in the form. Refer to Configuring Parameters section for details on each input.
Click on Update Settings to test your settings. This will establish a test client/connection with the W&B server.
If your connection is verified, toggle the Enable LDAP Authentication and select the Update Settings button.
Set up an LDAP connection with the following environment variables:
| Environment variable | Required | Example |
| --- | --- | --- |
| LOCAL_LDAP_ADDRESS | Yes | ldaps://ldap.example.com:636 |
| LOCAL_LDAP_BASE_DN | Yes | email=mail,group=gidNumber |
| LOCAL_LDAP_BIND_DN | No | cn=admin, dc=example,dc=org |
| LOCAL_LDAP_BIND_PW | No | |
| LOCAL_LDAP_ATTRIBUTES | Yes | email=mail, group=gidNumber |
| LOCAL_LDAP_TLS_ENABLE | No | |
| LOCAL_LDAP_GROUP_ALLOW_LIST | No | |
| LOCAL_LDAP_LOGIN | No | |
See the Configuration parameters section for definitions of each environment variable. Note that the environment variable prefix LOCAL_LDAP was omitted from the definition names for clarity.
Configuration parameters
The following table lists and describes required and optional LDAP configurations.
| Environment variable | Definition | Required |
| --- | --- | --- |
| ADDRESS | The address of your LDAP server within the VPC that hosts W&B Server. | Yes |
| BASE_DN | The root path that searches start from; required for any queries into the directory. | Yes |
| BIND_DN | Path of the administrative user registered in the LDAP server. This is required if the LDAP server does not support unauthenticated binding. If specified, W&B Server connects to the LDAP server as this user. Otherwise, W&B Server connects using anonymous binding. | No |
| BIND_PW | The password for the administrative user, used to authenticate the binding. If left blank, W&B Server connects using anonymous binding. | No |
| ATTRIBUTES | Email and group ID attribute names, provided as comma-separated string values. | Yes |
| TLS_ENABLE | Enable TLS. | No |
| GROUP_ALLOW_LIST | Group allowlist. | No |
| LOGIN | Tells W&B Server to use LDAP to authenticate. Set to either True or False. Optionally set this to False to test the LDAP configuration, then set it to True to start LDAP authentication. | No |
5.2.1.2 - Configure SSO with OIDC
W&B Server’s support for OpenID Connect (OIDC) compatible identity providers allows for management of user identities and group memberships through external identity providers like Okta, Keycloak, Auth0, Google, and Entra.
OpenID Connect (OIDC)
W&B Server supports the following OIDC authentication flows for integrating with external Identity Providers (IdPs).
Implicit Flow with Form Post
Authorization Code Flow with Proof Key for Code Exchange (PKCE)
These flows authenticate users and provide W&B Server with the necessary identity information (in the form of ID tokens) to manage access control.
The ID token is a JWT that contains the user’s identity information, such as their name, username, email, and group memberships. W&B Server uses this token to authenticate the user and map them to appropriate roles or groups in the system.
In the context of W&B Server, access tokens authorize requests to APIs on behalf of the user, but since W&B Server’s primary concern is user authentication and identity, it only requires the ID token.
To assist with configuring identity providers for Dedicated Cloud or Self-managed W&B Server installations, follow these guidelines for various IdPs. If you’re using the SaaS version of W&B, reach out to support@wandb.com for assistance in configuring an Auth0 tenant for your organization.
Follow the procedure below to set up AWS Cognito for authorization:
First, sign in to your AWS account and navigate to the AWS Cognito App.
Provide an allowed callback URL to configure the application in your IdP:
Add http(s)://YOUR-W&B-HOST/oidc/callback as the callback URL. Replace YOUR-W&B-HOST with your W&B host path.
If your IdP supports universal logout, set the Logout URL to http(s)://YOUR-W&B-HOST. Replace YOUR-W&B-HOST with your W&B host path.
For example, if your application was running at https://wandb.mycompany.com, you would replace YOUR-W&B-HOST with wandb.mycompany.com.
The image below demonstrates how to provide allowed callback and sign-out URLs in AWS Cognito.
You can also configure wandb/local to perform an authorization_code grant that uses the PKCE Code Exchange flow.
Select one or more OAuth grant types to configure how AWS Cognito delivers tokens to your app.
W&B requires specific OpenID Connect (OIDC) scopes. Select the following from AWS Cognito App:
“openid”
“profile”
“email”
For example, your AWS Cognito App UI should look similar to the following image:
Select the Auth Method in the settings page or set the OIDC_AUTH_METHOD environment variable to tell wandb/local which grant type to use.
You must set the Auth Method to pkce.
You need a Client ID and the URL of your OIDC issuer. The OpenID discovery document must be available at $OIDC_ISSUER/.well-known/openid-configuration
For example, you can generate your issuer URL by appending your User Pool ID to the Cognito IdP URL from the App Integration tab within the User Pools section:
Do not use the “Cognito domain” for the IdP URL. Cognito provides its discovery document at https://cognito-idp.$REGION.amazonaws.com/$USER_POOL_ID
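If you want to confirm that your issuer URL is correct before configuring W&B Server, a quick check such as the following can help. This is a minimal sketch, not part of the official setup; it assumes the requests library and uses a hypothetical Cognito User Pool ID, and simply fetches the discovery document described above.

import requests

# Hypothetical issuer URL; replace REGION and the User Pool ID with your own values.
OIDC_ISSUER = "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"

# The discovery document must be reachable at this well-known path.
resp = requests.get(f"{OIDC_ISSUER}/.well-known/openid-configuration", timeout=10)
resp.raise_for_status()

config = resp.json()
print(config["issuer"])                  # should match OIDC_ISSUER
print(config["authorization_endpoint"])  # endpoint used during the OIDC flow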
Follow the procedure below to set up Okta for authorization:
On the overview screen of the Okta application that you just created, make note of the Client ID under Client Credentials under the General tab:
To identify the Okta OIDC Issuer URL, select Settings and then Account on the left side.
The Okta UI shows the company name under Organization Contact.
The OIDC issuer URL has the following format: https://COMPANY.okta.com. Replace COMPANY with the corresponding value. Make note of it.
On the screen named “Register an application,” fill out the values as follows:
Specify a name, for example “Weights and Biases application”
By default the selected account type is: “Accounts in this organizational directory only (Default Directory only - Single tenant).” Modify if you need to.
Configure Redirect URI as type Web with value: https://YOUR_W_AND_B_URL/oidc/callback
Click “Register.”
Make a note of the “Application (client) ID” and “Directory (tenant) ID.”
On the left side, click Authentication.
Under Front-channel logout URL, specify: https://YOUR_W_AND_B_URL/logout
Click “Save.”
On the left side, click “Certificates & secrets.”
Click “Client secrets” and then click “New client secret.”
On the screen named “Add a client secret,” fill out the values as follows:
Enter a description, for example “wandb”
Leave “Expires” as is or change if you have to.
Click “Add.”
Make a note of the “Value” of the secret. There is no need for the “Secret ID.”
You should now have made notes of three values:
OIDC Client ID
OIDC Client Secret
Tenant ID (needed for the OIDC Issuer URL)
The OIDC issuer URL has the following format: https://login.microsoftonline.com/${TenantID}/v2.0
Set up SSO on the W&B Server
To set up SSO, you need administrator privileges and the following information:
OIDC Client ID
OIDC Auth method (implicit or pkce)
OIDC Issuer URL
OIDC Client Secret (optional; depends on how you have set up your IdP)
Should your IdP require an OIDC Client Secret, specify it with the environment variable OIDC_SECRET.
You can configure SSO using either the W&B Server UI or by passing environment variables to the wandb/local pod. The environment variables take precedence over values set in the UI.
If you’re unable to log in to your instance after configuring SSO, you can restart the instance with the LOCAL_RESTORE=true environment variable set. This outputs a temporary password to the container’s logs and disables SSO. Once you’ve resolved any issues with SSO, you must remove that environment variable to enable SSO again.
The System Console is the successor to the System Settings page. It is available with the W&B Kubernetes Operator based deployment.
Navigate to Settings, then Authentication. Select OIDC in the Type dropdown.
Enter the values.
Click on Save.
Log out and then log back in, this time using the IdP login screen.
Sign in to your Weights & Biases instance.
Navigate to the W&B App.
From the dropdown, select System Settings:
Enter your Issuer, Client ID, and Authentication Method.
Select Update settings.
If you’re unable to log in to your instance after configuring SSO, you can restart the instance with the LOCAL_RESTORE=true environment variable set. This outputs a temporary password to the container’s logs and turns off SSO. Once you’ve resolved any issues with SSO, you must remove that environment variable to enable SSO again.
Security Assertion Markup Language (SAML)
W&B Server does not support SAML.
5.2.1.3 - Use federated identities with SDK
Use identity federation to sign in using your organizational credentials through the W&B SDK. If your W&B organization admin has configured SSO for your organization, then you already use your organizational credentials to sign in to the W&B app UI. In that sense, identity federation is like SSO for the W&B SDK, but it uses JSON Web Tokens (JWTs) directly. You can use identity federation as an alternative to API keys.
RFC 7523 forms the underlying basis for identity federation with SDK.
Identity federation is available in Preview for Enterprise plans on all platform types - SaaS Cloud, Dedicated Cloud, and Self-managed instances. Reach out to your W&B team for any questions.
For the purpose of this document, the terms identity provider and JWT issuer are used interchangeably. Both refer to one and the same thing in the context of this capability.
JWT issuer setup
As a first step, an organization admin must set up a federation between your W&B organization and a publicly accessible JWT issuer.
Go to the Settings tab in your organization dashboard
In the Authentication option, press Set up JWT Issuer
Add the JWT issuer URL in the text box and press Create
W&B automatically looks for an OIDC discovery document at the path ${ISSUER_URL}/.well-known/oidc-configuration, and tries to find the JSON Web Key Set (JWKS) at the relevant URL in the discovery document. The JWKS is used for real-time validation of the JWTs to ensure that they have been issued by the relevant identity provider.
Using the JWT to access W&B
Once a JWT issuer has been set up for your W&B organization, users can start accessing the relevant W&B projects using JWTs issued by that identity provider. The mechanism for using JWTs is as follows:
You must sign in to the identity provider using one of the mechanisms available in your organization. Some providers can be accessed in an automated manner using an API or SDK, while others can only be accessed using a relevant UI. Reach out to your W&B organization admin or the owner of the JWT issuer for details.
Once you’ve retrieved the JWT after signing in to your identity provider, store it in a file at a secure location and configure the absolute file path in an environment variable WANDB_IDENTITY_TOKEN_FILE.
Access your W&B project using the W&B SDK or CLI. The SDK or CLI should automatically detect the JWT and exchange it for a W&B access token after the JWT has been successfully validated. The W&B access token is used to access the relevant APIs for enabling your AI workflows, that is, to log runs, metrics, artifacts and so forth. The access token is by default stored at the path ~/.config/wandb/credentials.json. You can change that path by specifying the environment variable WANDB_CREDENTIALS_FILE.
JWTs are meant to be short-lived credentials to address the shortcomings of long-lived credentials like API keys, passwords and so forth. Depending on the JWT expiry time configured in your identity provider, you must continuously refresh the JWT and ensure that it’s stored in the file referenced by the environment variable WANDB_IDENTITY_TOKEN_FILE.
The W&B access token also has a default expiry duration, after which the SDK or CLI automatically tries to refresh it using your JWT. If the user JWT has also expired by that time and has not been refreshed, that could result in an authentication failure. If possible, implement the JWT retrieval and post-expiry refresh mechanism as part of the AI workload that uses the W&B SDK or CLI.
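Putting the preceding steps together, the following is a minimal sketch of a workload that uses a JWT instead of an API key. How you obtain and refresh the JWT depends on your identity provider and is assumed to happen elsewhere; the file paths and project name are placeholders.

import os
import wandb

# Absolute path to the file where your workload stores the freshly issued JWT (placeholder path).
os.environ["WANDB_IDENTITY_TOKEN_FILE"] = "/secure/path/wandb_jwt.txt"

# Optional: override where the exchanged W&B access token is cached (placeholder path).
os.environ["WANDB_CREDENTIALS_FILE"] = "/secure/path/wandb_credentials.json"

# The SDK detects the JWT, exchanges it for a W&B access token, and proceeds as usual.
run = wandb.init(project="my-awesome-project")
run.log({"accuracy": 0.9})
run.finish()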
JWT validation
As part of the workflow to exchange the JWT for a W&B access token and then access a project, the JWT undergoes the following validations:
The JWT signature is verified using the JWKS at the W&B organization level. This is the first line of defense, and if this fails, that means there’s a problem with your JWKS or how your JWT is signed.
The iss claim in the JWT should be equal to the issuer URL configured at the organization level.
The sub claim in the JWT should be equal to the user’s email address as configured in the W&B organization.
The aud claim in the JWT should be equal to the name of the W&B organization which houses the project that you are accessing as part of your AI workflow. In case of Dedicated Cloud or Self-managed instances, you could configure an instance-level environment variable SKIP_AUDIENCE_VALIDATION to true to skip validation of the audience claim, or use wandb as the audience.
The exp claim in the JWT is checked to see if the token is valid or has expired and needs to be refreshed.
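To make these claim requirements concrete, the following is an illustrative sketch of a decoded JWT payload that would pass the checks above. All values are hypothetical; your identity provider issues and signs the actual token.

# Illustrative only: the decoded JWT payload that W&B expects, with hypothetical values.
expected_claims = {
    "iss": "https://idp.example.com",   # must match the issuer URL configured for the organization
    "sub": "dev-user1@example.com",     # must match the user's email address in the W&B organization
    "aud": "my-wandb-org",              # W&B organization name (or "wandb" if audience validation is skipped)
    "exp": 1735689600,                  # expiry timestamp; expired tokens must be refreshed
}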
External service accounts
W&B has long supported built-in service accounts with long-lived API keys. With the identity federation capability for the SDK and CLI, you can also bring external service accounts that use JWTs for authentication, as long as those JWTs are issued by the same issuer that is configured at the organization level. A team admin can configure external service accounts within the scope of a team, like the built-in service accounts.
To configure an external service account:
Go to the Service Accounts tab for your team
Press New service account
Provide a name for the service account, select Federated Identity as the Authentication Method, provide a Subject, and press Create
The sub claim in the external service account’s JWT should be the same as what the team admin configures as its subject in the team-level Service Accounts tab. That claim is verified as part of JWT validation. The aud claim requirement is similar to that for human user JWTs.
When using an external service account’s JWT to access W&B, it’s typically easier to automate the workflow to generate the initial JWT and continuously refresh it. If you would like to attribute the runs logged using an external service account to a human user, you can configure the environment variables WANDB_USERNAME or WANDB_USER_EMAIL for your AI workflow, similar to how it’s done for the built-in service accounts.
W&B recommends using a mix of built-in and external service accounts across AI workloads with different levels of data sensitivity, in order to strike a balance between flexibility and simplicity.
5.2.1.4 - Use service accounts to automate workflows
Manage automated or non-interactive workflows using org and team scoped service accounts
A service account represents a non-human or machine user that can automatically perform common tasks across projects within a team or across teams.
An org admin can create a service account at the scope of the organization.
A team admin can create a service account at the scope of that team.
A service account’s API key allows the caller to read from or write to projects within the service account’s scope.
Service accounts allow for centralized management of workflows by multiple users or teams, to automate experiment tracking for W&B Models or to log traces for W&B Weave. You have the option to associate a human user’s identity with a workflow managed by a service account, by using either of the environment variables WANDB_USERNAME or WANDB_USER_EMAIL.
Service accounts scoped to an organization have permissions to read and write in all projects in the organization, regardless of the team, with the exception of restricted projects. Before an organization-scoped service account can access a restricted project, an admin of that project must explicitly add the service account to the project.
An organization admin can obtain the API key for an organization-scoped service account from the Service Accounts tab of the organization or account dashboard.
To create a new organization-scoped service account:
Click New service account button in the Service Accounts tab of your organization dashboard.
Enter a Name.
Select a default team for the service account.
Click Create.
Next to the newly created service account, click Copy API key.
Store the copied API key in a secret manager or another secure but accessible location.
An organization-scoped service account requires a default team, even though it has access to non-restricted projects owned by all teams within the organization. This helps to prevent a workload from failing if the WANDB_ENTITY variable is not set in the environment for your model training or generative AI app. To use an organization-scoped service account for a project in a different team, you must set the WANDB_ENTITY environment variable to that team.
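As an illustration of the entity requirement described above, the following is a minimal sketch of a workload that uses an organization-scoped service account. It assumes the service account’s API key is already exported as WANDB_API_KEY; the team, user, and project names are placeholders.

import os
import wandb

# Assumes WANDB_API_KEY already holds the organization-scoped service account key.
os.environ["WANDB_ENTITY"] = "ml-platform-team"  # team that owns the target project (placeholder)
os.environ["WANDB_USERNAME"] = "jane-doe"        # optional: attribute runs to a human user (placeholder)

run = wandb.init(project="nightly-retraining")   # placeholder project name
run.log({"loss": 0.123})
run.finish()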
Team-scoped service accounts
A team-scoped service account can read and write in all projects within its team, except to restricted projects in that team. Before a team-scoped service account can access a restricted project, an admin of that project must explicitly add the service account to the project.
As a team admin, you can get the API key for a team-scoped service account in your team at <WANDB_HOST_URL>/<your-team-name>/service-accounts. Alternatively you can go to the Team settings for your team and then refer to the Service Accounts tab.
To create a new team scoped service account for your team:
Click New service account button in the Service Accounts tab of your team.
Enter a Name.
Select Generate API key (Built-in) as the authentication method.
Click Create.
Next to the newly created service account, click Copy API key.
Store the copied API key in a secret manager or another secure but accessible location.
If you do not configure a team in your model training or generative AI app environment that uses a team-scoped service account, the model runs or Weave traces log to the named project within the service account’s parent team. In such a scenario, user attribution using the WANDB_USERNAME or WANDB_USER_EMAIL variables does not work unless the referenced user is part of the service account’s parent team.
A team-scoped service account cannot log runs to a team or restricted-scoped project in a team different from its parent team, but it can log runs to an open visibility project within another team.
External service accounts
In addition to Built-in service accounts, W&B also supports team-scoped External service accounts with the W&B SDK and CLI using Identity federation with identity providers (IdPs) that can issue JSON Web Tokens (JWTs).
5.2.2 - Access management
Manage users and teams within an organization
The first user to sign up to W&B with a unique organization domain is assigned the instance administrator role for that organization. The organization administrator assigns specific users team administrator roles.
W&B recommends having more than one instance admin in an organization. It is a best practice to ensure that admin operations can continue when the primary admin is not available.
A team administrator is a user in an organization who has administrative permissions within a team.
The organization administrator can access and use an organization’s account settings at https://wandb.ai/account-settings/ to invite users, assign or update a user’s role, create teams, remove users from your organization, assign the billing administrator, and more. See Add and manage users for more information.
Once an organization administrator creates a team, the instance administrator or a team administrator can:
Invite users to that team or remove users from the team.
Assign or update a team member’s role.
Automatically add new users to a team when they join your organization.
Both the organization administrator and the team administrator use team dashboards at https://wandb.ai/<your-team-name> to manage teams. For more information on what organization administrators and team administrators can do, see Add and manage teams.
Limit visibility to specific projects
Define the scope of a W&B project to limit who can view, edit, and submit W&B runs to it. Limiting who can view a project is particularly useful if a team works with sensitive or confidential data.
An organization admin, team admin, or the owner of a project can both set and edit a project’s visibility.
Change the name of your organization
The following workflow applies to users with instance administrator roles. Reach out to an administrator in your organization if you believe you should have instance administrator permissions.
In the upper right corner of the page, select the User menu dropdown. Within the Account section of the dropdown, select Settings.
Within the Settings tab, select General.
Select the Change name button.
Within the modal that appears, provide a new name for your organization and select the Save name button.
Add and manage users
As an administrator, use your organization’s dashboard to:
Invite or remove users.
Assign or update a user’s role.
Assign the billing administrator.
There are several ways an organization administrator can add users to an organization:
Member-by-invite
Auto provisioning with SSO
Domain capture
Seats and pricing
The following table summarizes how seats work for Models and Weave:

| Product | Seats | Cost based on |
| --- | --- | --- |
| Models | Pay per seat | How many paid Models seats you have, and how much usage you’ve accrued, determines your overall subscription cost. Each user can be assigned one of the three available seat types: Full, Viewer, and No-Access. |
| Weave | Free | Usage based |
Invite a user
Administrators can invite users to their organization, as well as specific teams within the organization.
In the upper right corner of the page, select the User menu dropdown. Within the Account section of the dropdown, select Users.
Select Invite new user.
In the modal that appears, provide the email or username of the user in the Email or username field.
(Recommended) Add the user to a team from the Choose teams dropdown menu.
From the Select role dropdown, select the role to assign to the user. You can change the user’s role at a later time. See the table listed in Assign a role for more information about possible roles.
Choose the Send invite button.
W&B sends an invite link using a third-party email server to the user’s email after you select the Send invite button. A user can access your organization once they accept the invite.
Navigate to https://<org-name>.io/console/settings/. Replace <org-name> with your organization name.
Select the Add user button
Within the modal that appears, provide the email of the new user in the Email field.
Select a role to assign to the user from the Role dropdown. You can change the user’s role at a later time. See the table listed in Assign a role for more information about possible roles.
Check the Send invite email to user box if you want W&B to send an invite link using a third-party email server to the user’s email.
Select the Add new user button.
Auto provision users
A W&B user with matching email domain can sign in to your W&B Organization with Single Sign-On (SSO) if you configure SSO and your SSO provider permits it. SSO is available for all Enterprise licenses.
Enable SSO for authentication
W&B strongly recommends and encourages that users authenticate using Single Sign-On (SSO). Reach out to your W&B team to enable SSO for your organization.
To learn more about how to setup SSO with Dedicated cloud or Self-managed instances, refer to SSO with OIDC or SSO with LDAP.
W&B assigns auto-provisioned users the “Member” role by default. You can change the role of auto-provisioned users at any time.
Auto-provisioning users with SSO is on by default for Dedicated cloud instances and Self-managed deployments. You can turn off auto provisioning. Turning auto provisioning off enables you to selectively add specific users to your W&B organization.
The following tabs describe how to turn off auto provisioning with SSO based on deployment type:
Reach out to your W&B team if you are on Dedicated cloud instance and you want to turn off auto provisioning with SSO.
Use the W&B Console to turn off auto provisioning with SSO:
Navigate to https://<org-name>.io/console/settings/. Replace <org-name> with your organization name.
Choose Security
Select Disable SSO Provisioning to turn off auto provisioning with SSO.
Auto provisioning with SSO is useful for adding users to an organization at scale because organization administrators do not need to generate individual user invitations.
Domain capture
Domain capture helps your employees join your company’s organization and ensures that new users do not create assets outside of your company’s jurisdiction.
Domains must be unique
Domains are unique identifiers. This means that you cannot use a domain that is already in use by another organization.
Domain capture lets you automatically add people with a company email address, such as @example.com, to your W&B SaaS cloud organization. This helps all your employees join the right organization and ensures that new users do not create assets outside of your company jurisdiction.
This table summarizes the behavior of new and existing users with and without domain capture enabled:

| | With domain capture | Without domain capture |
| --- | --- | --- |
| New users | Users who sign up for W&B from verified domains are automatically added as members to your organization’s default team. They can choose additional teams to join at sign up, if you enable team joining. They can still join other organizations and teams with an invitation. | Users can create W&B accounts without knowing there is a centralized organization available. |
| Invited users | Invited users automatically join your organization when accepting your invite. Invited users are not automatically added as members to your organization’s default team. They can still join other organizations and teams with an invitation. | Invited users automatically join your organization when accepting your invite. They can still join other organizations and teams with an invitation. |
| Existing users | Existing users with verified email addresses from your domains can join your organization’s teams within the W&B App. All data that existing users create before joining your organization remains. W&B does not migrate the existing user’s data. | Existing W&B users may be spread across multiple organizations and teams. |
To automatically assign non-invited new users to a default team when they join your organization:
In the upper right corner of the page, select the User menu dropdown. From the dropdown, choose Settings.
Within the Settings tab, select General.
Choose the Claim domain button within Domain capture.
Select the team that you want new users to automatically join from the Default team dropdown. If no teams are available, you’ll need to update team settings. See the instructions in Add and manage teams.
Click the Claim email domain button.
You must enable domain matching within a team’s settings before you can automatically assign non-invited new users to that team.
Navigate to the team’s dashboard at https://wandb.ai/<team-name>, where <team-name> is the name of the team for which you want to enable domain matching.
Select Team settings in the global navigation on the left side of the team’s dashboard.
Within the Privacy section, toggle the “Recommend new users with matching email domains join this team upon signing up” option.
Reach out to your W&B Account Team if you use Dedicated or Self-managed deployment type to configure domain capture. Once configured, your W&B SaaS instance automatically prompts users who create a W&B account with your company email address to contact your administrator to request access to your Dedicated or Self-managed instance.
| | With domain capture | Without domain capture |
| --- | --- | --- |
| New users | Users who sign up for W&B on SaaS cloud from verified domains are automatically prompted to contact an administrator with an email address you customize. They can still create an organization on SaaS cloud to trial the product. | Users can create W&B SaaS cloud accounts without learning their company has a centralized dedicated instance. |
| Existing users | Existing W&B users may be spread across multiple organizations and teams. | Existing W&B users may be spread across multiple organizations and teams. |
Assign or update a user’s role
Every member in an Organization has an organization role and seat for both W&B Models and Weave. The type of seat they have determines both their billing status and the actions they can take in each product line.
You initially assign an organization role to a user when you invite them to your organization. You can change any user’s role at a later time.
A user within an organization can have one of the following roles:

| Role | Description |
| --- | --- |
| Administrator | An instance administrator who can add or remove other users in the organization, change user roles, manage custom roles, add teams, and more. W&B recommends ensuring there is more than one administrator in the event that your administrator is unavailable. |
| Member | A regular user of the organization, invited by an instance administrator. An organization member cannot invite other users or manage existing users in the organization. |
| Viewer (Enterprise-only feature) | A view-only user of your organization, invited by an instance administrator. A viewer only has read access to the organization and the underlying teams that they are a member of. |
| Custom Roles (Enterprise-only feature) | Custom roles allow organization administrators to compose new roles by inheriting from the preceding View-Only or Member roles and adding additional permissions to achieve fine-grained access control. Team administrators can then assign any of those custom roles to users in their respective teams. |
In the upper right corner of the page, select the User menu dropdown. From the dropdown, choose Users.
Provide the name or email of the user in the search bar.
Select a role from the TEAM ROLE dropdown next to the name of the user.
Assign or update a user’s access
A user within an organization has one of the following Models seat or Weave access types: Full, Viewer, or No access.

| Seat type | Description |
| --- | --- |
| Full | Users with this role type have full permissions to write, read, and export data for Models or Weave. |
| Viewer | A view-only user of your organization. A viewer only has read access to the organization and the underlying teams that they are a part of, and view-only access to Models or Weave. |
| No access | Users with this role have no access to the Models or Weave products. |
Models seat type and Weave access type are defined at the organization level and inherited by the team. To change a user’s seat type, navigate to the organization settings and follow these steps:
For SaaS users, navigate to your organization’s settings at https://wandb.ai/account-settings/<organization>/settings. Be sure to replace the value enclosed in angle brackets (<>) with your organization name. For Dedicated and Self-managed deployments, navigate to https://<your-instance>.wandb.io/org/dashboard.
Select the Users tab.
From the Role dropdown, select the seat type you want to assign to the user.
The organization role and subscription type determines which seat types are available within your organization.
Select Create a team to collaborate on the left navigation panel underneath Teams.
Provide a name for your team in the Team name field in the modal that appears.
Choose a storage type.
Select the Create team button.
After you select the Create team button, W&B redirects you to a new team page at https://wandb.ai/<team-name>, where <team-name> is the name you provide when you create the team.
Once you have a team, you can add users to that team.
Invite users to a team
Invite users to a team in your organization. Use the team’s dashboard to invite users using their email address or W&B username if they already have a W&B account.
Navigate to https://wandb.ai/<team-name>.
Select Team settings in the global navigation on the left side of the dashboard.
Select the Users tab.
Choose Invite a new user.
Within the modal that appears, provide the email of the user in the Email or username field and select the role to assign to that user from the Select a team role dropdown. For more information about roles a user can have in a team, see Team roles.
Match members to a team organization during sign up
Allow new users within your organization to discover teams within your organization when they sign up. New users must have a verified email domain that matches your organization’s verified email domain. Verified new users can view a list of verified teams that belong to an organization when they sign up for a W&B account.
An organization administrator must enable domain claiming. To enable domain capture, see the steps described in Domain capture.
Assign or update a team member’s role
Select the account type icon next to the name of the team member.
From the drop-down, choose the account type you want that team member to possess.
This table lists the roles you can assign to a member of a team:

| Role | Definition |
| --- | --- |
| Administrator | A user who can add and remove other users in the team, change user roles, and configure team settings. |
| Member | A regular user of a team, invited by email or their organization-level username by the team administrator. A member user cannot invite other users to the team. |
| View-Only (Enterprise-only feature) | A view-only user of a team, invited by email or their organization-level username by the team administrator. A view-only user only has read access to the team and its contents. |
| Service (Enterprise-only feature) | A service worker or service account is an API key that is useful for utilizing W&B with your run automation tools. If you use an API key from a service account for your team, be sure to set the environment variable WANDB_USERNAME to correctly attribute runs to the appropriate user. |
| Custom Roles (Enterprise-only feature) | Custom roles allow organization administrators to compose new roles by inheriting from the preceding View-Only or Member roles and adding additional permissions to achieve fine-grained access control. Team administrators can then assign any of those custom roles to users in their respective teams. Refer to this article for details. |
Only enterprise licenses on Dedicated cloud or Self-managed deployment can assign custom roles to members in a team.
Remove users from a team
Remove a user from a team using the team’s dashboard. W&B preserves runs created in a team even if the member who created the runs is no longer on that team.
Navigate to https://wandb.ai/<team-name>.
Select Team settings in the left navigation bar.
Select the Users tab.
Hover your mouse next to the name of the user you want to delete. Select the ellipses or three dots icon (…) when it appears.
From the dropdown, select Remove user.
5.2.2.2 - Manage access control for projects
Manage project access using visibility scopes and project-level roles
Define the scope of a W&B project to limit who can view, edit, and submit W&B runs to it.
You can combine two controls to configure the access level for any project within a W&B team. Visibility scope is the higher-level mechanism; use it to control which groups of users can view or submit runs in a project. For a project with Team or Restricted visibility scope, you can then use project level roles to control the level of access that each user has within the project.
The owner of a project, a team admin, or an organization admin can set or edit a project’s visibility.
Visibility scopes
There are four project visibility scopes you can choose from. In order of most public to most private, they are:
| Scope | Description |
| --- | --- |
| Open | Anyone who knows about the project can view it and submit runs or reports. |
| Public | Anyone who knows about the project can view it. Only your team can submit runs or reports. |
| Team | Only members of the parent team can view the project and submit runs or reports. Anyone outside the team cannot access the project. |
| Restricted | Only invited members from the parent team can view the project and submit runs or reports. |
Set a project’s scope to Restricted if you would like to collaborate on workflows related to sensitive or confidential data. When you create a restricted project within a team, you can invite or add specific members from the team to collaborate on relevant experiments, artifacts, reports, and so forth.
Unlike other project scopes, members of a team do not get implicit access to a restricted project. At the same time, team admins can join restricted projects if needed.
Set visibility scope on a new or existing project
Set a project’s visibility scope when you create a project or when editing it later.
Only the owner of the project or a team admin can set or edit its visibility scope.
When a team admin enables Make all future team projects private (public sharing not allowed) within a team’s privacy setting, that turns off Open and Public project visibility scopes for that team. In this case, your team can only use Team and Restricted scopes.
Set visibility scope when you create a new project
Navigate to your W&B organization on SaaS Cloud, Dedicated Cloud, or Self-managed instance.
Click the Create a new project button in the left hand sidebar’s My projects section. Alternatively, navigate to the Projects tab of your team and click the Create new project button in the upper right hand corner.
After selecting the parent team and entering the name of the project, select the desired scope from the Project Visibility dropdown.
Complete the following step if you select Restricted visibility.
Provide names of one or more W&B team members in the Invite team members field. Add only those members who are essential to collaborate on the project.
You can add or remove members in a restricted project later, from its Users tab.
Edit visibility scope of an existing project
Navigate to your W&B Project.
Select the Overview tab on the left column.
Click the Edit Project Details button on the upper right corner.
From the Project Visibility dropdown, select the desired scope.
Complete the following step if you select Restricted visibility.
Go to the Users tab in the project, and click Add user button to invite specific users to the restricted project.
All members of a team lose access to a project if you change its visibility scope from Team to Restricted, unless you invite the required team members to the project.
All members of a team get access to a project if you change its visibility scope from Restricted to Team.
If you remove a team member from the user list for a restricted project, they lose access to that project.
Other key things to note for restricted scope
If you want to use a team-level service account in a restricted project, you must invite or add it to the project explicitly. Otherwise, a team-level service account cannot access a restricted project.
You cannot move runs out of a restricted project, but you can move runs from a non-restricted project to a restricted one.
You can convert the visibility of a restricted project to only Team scope, irrespective of the team privacy setting Make all future team projects private (public sharing not allowed).
If the owner of a restricted project is not part of the parent team anymore, the team admin should change the owner to ensure seamless operations in the project.
Project level roles
For Team or Restricted scoped projects in your team, you can assign a specific role to a user, which can be different from that user’s team level role. For example, if a user has the Member role at the team level, you can assign the View-Only role, the Admin role, or any available custom role to that user within a Team or Restricted scoped project in that team.
Project level roles are in preview on SaaS Cloud, Dedicated Cloud, and Self-managed instances.
Assign project level role to a user
Navigate to your W&B Project.
Select the Overview tab on the left column.
Go to the Users tab in the project.
Click the currently assigned role for the pertinent user in the Project Role field, which should open up a dropdown listing the other available roles.
Select another role from the dropdown. It should save instantly.
When you change the project level role for a user to be different from their team level role, the project level role includes a * to indicate the difference.
Other key things to note for project level roles
By default, project level roles for all users in a team or restricted scoped project inherit their respective team level roles.
You cannot change the project level role of a user who has the View-Only role at the team level.
If a user’s project level role within a particular project is the same as their team level role, and a team admin later changes the team level role, the relevant project role automatically changes to track the team level role.
If you change a user’s project level role within a particular project so that it differs from their team level role, and a team admin later changes the team level role, the relevant project level role remains as is.
If you remove a user from a restricted project while their project level role differs from their team level role, and you later add the user back to the project, they inherit the team level role due to the default behavior. If needed, change the project level role again so that it differs from the team level role.
5.2.3 - Automate user and team management
SCIM API
Use SCIM API to manage users, and the teams they belong to, in an efficient and repeatable manner. You can also use the SCIM API to manage custom roles or assign roles to users in your W&B organization. Role endpoints are not part of the official SCIM schema. W&B adds role endpoints to support automated management of custom roles.
SCIM API is especially useful if you want to:
manage user provisioning and de-provisioning at scale
manage users with a SCIM-supporting Identity Provider
There are broadly three categories of SCIM API - User, Group, and Roles.
User SCIM API
User SCIM API allows for creating, deactivating, getting the details of a user, or listing all users in a W&B organization. This API also supports assigning predefined or custom roles to users in an organization.
Deactivate a user within a W&B organization with the DELETE User endpoint. Deactivated users can no longer sign in. However, deactivated users still appear in the organization’s user list.
It is possible to re-enable a deactivated user, if needed.
Group SCIM API
Group SCIM API allows for managing W&B teams, including creating or removing teams in an organization. Use the PATCH Group to add or remove users in an existing team.
There is no notion of a group of users having the same role within W&B. A W&B team closely resembles a group, and allows diverse personas with different roles to work collaboratively on a set of related projects. Teams can consist of different groups of users. Assign each user in a team a role: team admin, member, viewer, or a custom role.
W&B maps Group SCIM API endpoints to W&B teams because of the similarity between groups and W&B teams.
Custom role API
Custom role SCIM API allows for managing custom roles, including creating, listing, or updating custom roles in an organization.
Delete a custom role with caution.
Delete a custom role within a W&B organization with the DELETE Role endpoint. The predefined role that the custom role inherits is assigned to all users that are assigned the custom role before the operation.
Update the inherited role for a custom role with the PUT Role endpoint. This operation doesn’t affect any of the existing, that is, non-inherited custom permissions in the custom role.
W&B Python SDK API
Just as the SCIM API allows you to automate user and team management, you can also use some of the methods available in the W&B Python SDK API for that purpose. Note the following methods:
| Method name | Purpose |
| --- | --- |
| create_user(email, admin=False) | Add a user to the organization and optionally make them the organization admin. |
| user(userNameOrEmail) | Return an existing user in the organization. |
| user.teams() | Return the teams for the user. You can get the user object using the user(userNameOrEmail) method. |
| create_team(teamName, adminUserName) | Create a new team and optionally make an organization-level user the team admin. |
| team(teamName) | Return an existing team in the organization. |
| Team.invite(userNameOrEmail, admin=False) | Add a user to the team. You can get the team object using the team(teamName) method. |
| Team.create_service_account(description) | Add a service account to the team. You can get the team object using the team(teamName) method. |
| Member.delete() | Remove a member user from a team. You can get the list of member objects in a team using the team object’s members attribute, and you can get the team object using the team(teamName) method. |
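As an example, a minimal sketch that strings a few of these methods together might look like the following. It assumes your API key belongs to an instance or organization admin; the email, team, and user names are placeholders, and the member attribute names are assumptions based on the table above.

import wandb

api = wandb.Api()

# Add a user to the organization (set admin=True to make them an organization admin).
api.create_user("new-user@example.com", admin=False)

# Create a team and make an existing organization-level user its admin.
api.create_team("ml-platform-team", "jane-doe")

# Fetch the team object, invite the new user, and add a built-in service account.
team = api.team("ml-platform-team")
team.invite("new-user@example.com", admin=False)
team.create_service_account("Automation for nightly retraining")

# Remove a member from the team using the team object's members attribute.
for member in team.members:
    if member.username == "new-user":  # assumed attribute for illustration
        member.delete()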
5.2.4 - Manage users, groups, and roles with SCIM
The System for Cross-domain Identity Management (SCIM) API allows instance or organization admins to manage users, groups, and custom roles in their W&B organization. SCIM groups map to W&B teams.
The SCIM API is accessible at <host-url>/scim/ and supports the /Users and /Groups endpoints with a subset of the fields found in the RFC 7643 protocol. It additionally includes the /Roles endpoints which are not part of the official SCIM schema. W&B adds the /Roles endpoints to support automated management of custom roles in W&B organizations.
SCIM API applies to all hosting options including Dedicated Cloud, Self-managed instances, and SaaS Cloud. In SaaS Cloud, the organization admin must configure the default organization in user settings to ensure that the SCIM API requests go to the right organization. The setting is available in the section SCIM API Organization within user settings.
Authentication
The SCIM API is accessible by instance or organization admins using basic authentication with their API key. With basic authentication, send the HTTP request with the Authorization header that contains the word Basic followed by a space and a base64-encoded string for username:password where password is your API key. For example, to authorize as demo:p@55w0rd, the header should be Authorization: Basic ZGVtbzpwQDU1dzByZA==.
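To illustrate the authentication scheme described above, the following is a minimal sketch of a SCIM call using the requests library. The host, username, and API key are placeholders; the username is your W&B username and the password is your API key, which should come from a secret store rather than source code.

import base64
import requests

HOST = "https://example-org.wandb.io"  # placeholder W&B host URL
USERNAME = "demo"                      # placeholder W&B username
API_KEY = "p@55w0rd"                   # placeholder; use your real API key from a secret store

# Build the basic-auth header: "Basic " + base64("username:api-key").
token = base64.b64encode(f"{USERNAME}:{API_KEY}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}

# List all users in the organization.
resp = requests.get(f"{HOST}/scim/Users", headers=headers, timeout=30)
resp.raise_for_status()
for user in resp.json().get("Resources", []):
    print(user["userName"])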
User resource
The SCIM user resource maps to W&B users.
Get user
Endpoint: <host-url>/scim/Users/{id}
Method: GET
Description: Retrieve the information for a specific user in your SaaS Cloud organization or your Dedicated Cloud or Self-managed instance by providing the user’s unique ID.
Delete user
Endpoint: <host-url>/scim/Users/{id}
Method: DELETE
Description: Fully delete a user from your SaaS Cloud organization or your Dedicated Cloud or Self-managed instance by providing the user’s unique ID. Use the Create user API to add the user again to the organization or instance if needed.
Request Example:
To temporarily deactivate the user, refer to Deactivate user API which uses the PATCH endpoint.
DELETE /scim/Users/abc
Response Example:
(Status 204)
Deactivate user
Endpoint: <host-url>/scim/Users/{id}
Method: PATCH
Description: Temporarily deactivate a user in your Dedicated Cloud or Self-managed instance by providing the user’s unique ID. Use the Reactivate user API to reactivate the user when needed.
Supported Fields:
| Field | Type | Description |
| --- | --- | --- |
| op | String | Type of operation. The only allowed value is replace. |
| value | Object | Object {"active": false} indicating that the user should be deactivated. |
User deactivation and reactivation operations are not supported in SaaS Cloud.
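Because no request example is shown for deactivation, the following is a minimal sketch of the PATCH call using the fields described above. The host, authorization header, and user ID are placeholders; see the authentication example earlier in this section for how to build the header.

import requests

HOST = "https://example-org.wandb.io"                      # placeholder W&B host URL
headers = {"Authorization": "Basic <base64 username:api-key>"}  # placeholder credentials

payload = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "replace", "value": {"active": False}}  # deactivates the user
    ],
}

# "abc" is a placeholder user ID.
resp = requests.patch(f"{HOST}/scim/Users/abc", json=payload, headers=headers, timeout=30)
print(resp.status_code)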
Assign organization-level role to user
Endpoint: <host-url>/scim/Users/{id}
Method: PATCH
Description: Assign an organization-level role to a user. The role can be one of admin, viewer, or member as described here. For SaaS Cloud, ensure that you have configured the correct organization for the SCIM API in user settings.
Supported Fields:
| Field | Type | Description |
| --- | --- | --- |
| op | String | Type of operation. The only allowed value is replace. |
| path | String | The scope at which the role assignment operation takes effect. The only allowed value is organizationRole. |
| value | String | The predefined organization-level role to assign to the user. It can be one of admin, viewer, or member. This field is case insensitive for predefined roles. |
Request Example:
PATCH /scim/Users/abc
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  "Operations": [
    {
      "op": "replace",
      "path": "organizationRole",
      "value": "admin" // will set the user's organization-scoped role to admin
    }
  ]
}
Response Example:
This returns the User object.
(Status 200)
{
  "active": true,
  "displayName": "Dev User 1",
  "emails": {
    "Value": "dev-user1@test.com",
    "Display": "",
    "Type": "",
    "Primary": true
  },
  "id": "abc",
  "meta": {
    "resourceType": "User",
    "created": "2023-10-01T00:00:00Z",
    "lastModified": "2023-10-01T00:00:00Z",
    "location": "Users/abc"
  },
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User"
  ],
  "userName": "dev-user1",
  "teamRoles": [ // Returns the user's roles in all the teams that they are a part of
    {
      "teamName": "team1",
      "roleName": "admin"
    }
  ],
  "organizationRole": "admin" // Returns the user's role at the organization scope
}
Assign team-level role to user
Endpoint: <host-url>/scim/Users/{id}
Method: PATCH
Description: Assign a team-level role to a user. The role can be one of admin, viewer, member or a custom role as described here. For SaaS Cloud, ensure that you have configured the correct organization for SCIM API in user settings.
Supported Fields:
| Field | Type | Description |
| --- | --- | --- |
| op | String | Type of operation. The only allowed value is replace. |
| path | String | The scope at which the role assignment operation takes effect. The only allowed value is teamRoles. |
| value | Object array | A one-object array where the object consists of teamName and roleName attributes. The teamName is the name of the team where the user holds the role, and roleName can be one of admin, viewer, member, or a custom role. This field is case insensitive for predefined roles and case sensitive for custom roles. |
Request Example:
PATCH /scim/Users/abc
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  "Operations": [
    {
      "op": "replace",
      "path": "teamRoles",
      "value": [
        {
          "roleName": "admin", // role name is case insensitive for predefined roles and case sensitive for custom roles
          "teamName": "team1"  // will set the user's role in the team team1 to admin
        }
      ]
    }
  ]
}
Response Example:
This returns the User object.
(Status 200)
{
  "active": true,
  "displayName": "Dev User 1",
  "emails": {
    "Value": "dev-user1@test.com",
    "Display": "",
    "Type": "",
    "Primary": true
  },
  "id": "abc",
  "meta": {
    "resourceType": "User",
    "created": "2023-10-01T00:00:00Z",
    "lastModified": "2023-10-01T00:00:00Z",
    "location": "Users/abc"
  },
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User"
  ],
  "userName": "dev-user1",
  "teamRoles": [ // Returns the user's roles in all the teams that they are a part of
    {
      "teamName": "team1",
      "roleName": "admin"
    }
  ],
  "organizationRole": "admin" // Returns the user's role at the organization scope
}
Group resource
The SCIM group resource maps to W&B teams. That is, when you create a SCIM group in a W&B deployment, it creates a W&B team. The same applies to other group endpoints.
Get team
Endpoint: <host-url>/scim/Groups/{id}
Method: GET
Description: Retrieve team information by providing the team’s unique ID.
Deleting teams is currently unsupported by the SCIM API since there is additional data linked to teams. Delete teams from the app to confirm you want everything deleted.
Role resource
The SCIM role resource maps to W&B custom roles. As mentioned earlier, the /Roles endpoints are not part of the official SCIM schema. W&B adds the /Roles endpoints to support automated management of custom roles in W&B organizations.
Get custom role
Endpoint: <host-url>/scim/Roles/{id}
Method: GET
Description: Retrieve information for a custom role by providing the role’s unique ID.
Request Example:
GET /scim/Roles/abc
Response Example:
(Status 200)
{
  "description": "A sample custom role for example",
  "id": "Um9sZTo3",
  "inheritedFrom": "member", // indicates the predefined role
  "meta": {
    "resourceType": "Role",
    "created": "2023-11-20T23:10:14Z",
    "lastModified": "2023-11-20T23:31:23Z",
    "location": "Roles/Um9sZTo3"
  },
  "name": "Sample custom role",
  "organizationID": "T3JnYW5pemF0aW9uOjE0ODQ1OA==",
  "permissions": [
    {
      "name": "artifact:read",
      "isInherited": true // inherited from member predefined role
    },
    ...
    {
      "name": "project:update",
      "isInherited": false // custom permission added by admin
    }
  ],
  "schemas": [
    ""
  ]
}
List custom roles
Endpoint: <host-url>/scim/Roles
Method: GET
Description: Retrieve information for all custom roles in the W&B organization.
Request Example:
GET /scim/Roles
Response Example:
(Status 200)
{
  "Resources": [
    {
      "description": "A sample custom role for example",
      "id": "Um9sZTo3",
      "inheritedFrom": "member", // indicates the predefined role that the custom role inherits from
      "meta": {
        "resourceType": "Role",
        "created": "2023-11-20T23:10:14Z",
        "lastModified": "2023-11-20T23:31:23Z",
        "location": "Roles/Um9sZTo3"
      },
      "name": "Sample custom role",
      "organizationID": "T3JnYW5pemF0aW9uOjE0ODQ1OA==",
      "permissions": [
        {
          "name": "artifact:read",
          "isInherited": true // inherited from member predefined role
        },
        ...
        {
          "name": "project:update",
          "isInherited": false // custom permission added by admin
        }
      ],
      "schemas": [
        ""
      ]
    },
    {
      "description": "Another sample custom role for example",
      "id": "Um9sZToxMg==",
      "inheritedFrom": "viewer", // indicates the predefined role that the custom role inherits from
      "meta": {
        "resourceType": "Role",
        "created": "2023-11-21T01:07:50Z",
        "location": "Roles/Um9sZToxMg=="
      },
      "name": "Sample custom role 2",
      "organizationID": "T3JnYW5pemF0aW9uOjE0ODQ1OA==",
      "permissions": [
        {
          "name": "launchagent:read",
          "isInherited": true // inherited from viewer predefined role
        },
        ...
        {
          "name": "run:stop",
          "isInherited": false // custom permission added by admin
        }
      ],
      "schemas": [
        ""
      ]
    }
  ],
  "itemsPerPage": 9999,
  "schemas": [
    "urn:ietf:params:scim:api:messages:2.0:ListResponse"
  ],
  "startIndex": 1,
  "totalResults": 2
}
Create custom role
Endpoint: <host-url>/scim/Roles
Method: POST
Description: Create a new custom role in the W&B organization.
Supported Fields:
| Field | Type | Description |
| --- | --- | --- |
| name | String | Name of the custom role |
| description | String | Description of the custom role |
| permissions | Object array | Array of permission objects, where each object includes a name string field with a value of the form w&bobject:operation. For example, a permission object for the delete operation on W&B runs would have name set to run:delete. |
| inheritedFrom | String | The predefined role which the custom role inherits from. It can be either member or viewer. |
Request Example:
POST /scim/Roles
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Role"],
  "name": "Sample custom role",
  "description": "A sample custom role for example",
  "permissions": [
    {
      "name": "project:update"
    }
  ],
  "inheritedFrom": "member"
}
Response Example:
(Status 201)
{
    "description": "A sample custom role for example",
    "id": "Um9sZTo3",
    "inheritedFrom": "member", // indicates the predefined role
    "meta": {
        "resourceType": "Role",
        "created": "2023-11-20T23:10:14Z",
        "lastModified": "2023-11-20T23:31:23Z",
        "location": "Roles/Um9sZTo3"
    },
    "name": "Sample custom role",
    "organizationID": "T3JnYW5pemF0aW9uOjE0ODQ1OA==",
    "permissions": [
        {
            "name": "artifact:read",
            "isInherited": true // inherited from member predefined role
        },
        ...
        {
            "name": "project:update",
            "isInherited": false // custom permission added by admin
        }
    ],
    "schemas": [
        ""
    ]
}
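As a rough sketch, the same create request can be issued from Python with requests, under the same placeholder host and basic-auth assumptions as the earlier example:

```python
import requests

HOST = "https://<host-url>"          # placeholder for your deployment
AUTH = ("<username>", "<api-key>")   # assumed basic auth with your W&B API key

payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Role"],
    "name": "Sample custom role",
    "description": "A sample custom role for example",
    "permissions": [{"name": "project:update"}],
    "inheritedFrom": "member",
}

response = requests.post(f"{HOST}/scim/Roles", json=payload, auth=AUTH)
response.raise_for_status()  # a successful create returns status 201

created_role = response.json()
print("Created role with ID:", created_role["id"])
```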
Delete custom role
Endpoint: <host-url>/scim/Roles/{id}
Method: DELETE
Description: Delete a custom role in the W&B organization. Use it with caution. After the operation, the predefined role from which the custom role inherited is assigned to all users who were previously assigned the custom role.
Request Example:
DELETE /scim/Roles/abc
Response Example:
(Status 204)
Update custom role permissions
Endpoint: <host-url>/scim/Roles/{id}
Method: PATCH
Description: Add or remove custom permissions in a custom role in the W&B organization.
Supported Fields:
| Field | Type | Description |
|---|---|---|
| operations | Object array | Array of operation objects |
| op | String | Type of operation within the operation object. It can be either add or remove. |
| path | String | Static field in the operation object. The only allowed value is permissions. |
| value | Object array | Array of permission objects, where each object includes a name string field whose value is of the form w&bobject:operation. For example, a permission object for the delete operation on W&B runs would have name set to run:delete. |
Request Example:
PATCH /scim/Roles/abc
{
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {
            "op": "add", // indicates the type of operation, the other possible value being `remove`
            "path": "permissions",
            "value": [
                {
                    "name": "project:delete"
                }
            ]
        }
    ]
}
Response Example:
(Status 200)
{
    "description": "A sample custom role for example",
    "id": "Um9sZTo3",
    "inheritedFrom": "member", // indicates the predefined role
    "meta": {
        "resourceType": "Role",
        "created": "2023-11-20T23:10:14Z",
        "lastModified": "2023-11-20T23:31:23Z",
        "location": "Roles/Um9sZTo3"
    },
    "name": "Sample custom role",
    "organizationID": "T3JnYW5pemF0aW9uOjE0ODQ1OA==",
    "permissions": [
        {
            "name": "artifact:read",
            "isInherited": true // inherited from member predefined role
        },
        ...
        {
            "name": "project:update",
            "isInherited": false // existing custom permission added by admin before the update
        },
        {
            "name": "project:delete",
            "isInherited": false // new custom permission added by admin as part of the update
        }
    ],
    "schemas": [
        ""
    ]
}
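The PATCH operation maps to a small script in the same way. The sketch below, with the same placeholder host and authentication assumptions, adds one custom permission to an existing role:

```python
import requests

HOST = "https://<host-url>"
AUTH = ("<username>", "<api-key>")

role_id = "abc"
patch = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {
            "op": "add",           # use "remove" to take a permission away
            "path": "permissions",
            "value": [{"name": "project:delete"}],
        }
    ],
}

response = requests.patch(f"{HOST}/scim/Roles/{role_id}", json=patch, auth=AUTH)
response.raise_for_status()

updated_role = response.json()
print([p["name"] for p in updated_role["permissions"]])
```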
Update custom role metadata
Endpoint: <host-url>/scim/Roles/{id}
Method: PUT
Description: Update the name, description, or inherited role for a custom role in the W&B organization. This operation doesn't affect any of the existing (that is, non-inherited) custom permissions in the custom role.
Supported Fields:
| Field | Type | Description |
|---|---|---|
| name | String | Name of the custom role |
| description | String | Description of the custom role |
| inheritedFrom | String | The predefined role from which the custom role inherits. It can be either member or viewer. |
Request Example:
PUT /scim/Roles/abc
{
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Role"],
    "name": "Sample custom role",
    "description": "A sample custom role for example but now based on viewer",
    "inheritedFrom": "viewer"
}
Response Example:
(Status 200)
{
    "description": "A sample custom role for example but now based on viewer", // changed the description per the request
    "id": "Um9sZTo3",
    "inheritedFrom": "viewer", // indicates the predefined role, which is changed per the request
    "meta": {
        "resourceType": "Role",
        "created": "2023-11-20T23:10:14Z",
        "lastModified": "2023-11-20T23:31:23Z",
        "location": "Roles/Um9sZTo3"
    },
    "name": "Sample custom role",
    "organizationID": "T3JnYW5pemF0aW9uOjE0ODQ1OA==",
    "permissions": [
        {
            "name": "artifact:read",
            "isInherited": true // inherited from viewer predefined role
        },
        ... // any permissions that are in the member predefined role but not in viewer are no longer inherited after the update
        {
            "name": "project:update",
            "isInherited": false // custom permission added by admin
        },
        {
            "name": "project:delete",
            "isInherited": false // custom permission added by admin
        }
    ],
    "schemas": [
        ""
    ]
}
Choose any of the following environment variables for your instance depending on your IAM needs.
| Environment variable | Description |
|---|---|
| DISABLE_SSO_PROVISIONING | Set this to true to turn off user auto-provisioning in your W&B instance. |
| SESSION_LENGTH | To change the default user session expiry time, set this variable to the desired number of hours. For example, set SESSION_LENGTH to 24 for a 24-hour session expiry. The default value is 720 hours. |
| GORILLA_ENABLE_SSO_GROUP_CLAIMS | If you are using OIDC-based SSO, set this variable to true to automate W&B team membership in your instance based on your OIDC groups. Add a groups claim to the user OIDC token. It should be a string array where each entry is the name of a W&B team that the user should belong to. The array should include all the teams that a user is a part of. |
| GORILLA_LDAP_GROUP_SYNC | If you are using LDAP-based SSO, set this to true to automate W&B team membership in your instance based on your LDAP groups. |
| GORILLA_OIDC_CUSTOM_SCOPES | If you are using OIDC-based SSO, you can specify additional scopes that the W&B instance should request from your identity provider. These custom scopes do not change the SSO functionality in any way. |
| GORILLA_USE_IDENTIFIER_CLAIMS | If you are using OIDC-based SSO, set this variable to true to enforce the username and full name of your users using specific OIDC claims from your identity provider. If set, ensure that you configure the enforced username and full name in the preferred_username and name OIDC claims respectively. Usernames can only contain alphanumeric characters along with underscores and hyphens. |
| GORILLA_DISABLE_PERSONAL_ENTITY | Set this to true to turn off personal user projects in your W&B instance. If set, users cannot create new personal projects in their personal entities, and writes to existing personal projects are turned off. |
| GORILLA_DISABLE_ADMIN_TEAM_ACCESS | Set this to true to restrict Organization or Instance Admins from self-joining or adding themselves to a W&B team, thus ensuring that only Data & AI personas have access to the projects within the teams. |
W&B advises exercising caution and understanding all implications before enabling some of these settings, such as GORILLA_DISABLE_ADMIN_TEAM_ACCESS. Reach out to your W&B team with any questions.
5.3 - Data security
5.3.1 - Bring your own bucket (BYOB)
Bring your own bucket (BYOB) allows you to store W&B artifacts and other related sensitive data in your own cloud or on-prem infrastructure. In the case of Dedicated cloud or SaaS Cloud, data that you store in your bucket is not copied to the W&B-managed infrastructure.
Communication between W&B SDK / CLI / UI and your buckets occurs using pre-signed URLs.
W&B uses a garbage collection process to delete W&B Artifacts. For more information, see Deleting Artifacts.
You can specify a sub-path when configuring a bucket to ensure that W&B does not store any files in a folder at the root of the bucket. This can help you conform to your organization's bucket governance policy.
Configuration options
You can configure your storage bucket at two scopes: the Instance level or the Team level.
Instance level: Any user that has relevant permissions within your organization can access files stored in your instance level storage bucket.
Team level: Members of a W&B Team can access files stored in the bucket configured at the Team level. Team level storage buckets allow greater data access control and data isolation for teams with highly sensitive data or strict compliance requirements.
You can configure your bucket at both the instance level and separately for one or more teams within your organization.
For example, suppose you have a team called Kappa in your organization. Your organization (and Team Kappa) use the Instance level storage bucket by default. Next, you create a team called Omega. When you create Team Omega, you configure a Team level storage bucket for that team. Files generated by Team Omega are not accessible by Team Kappa. However, files created by Team Kappa are accessible by Team Omega. If you want to isolate data for Team Kappa, you must configure a Team level storage bucket for them as well.
Team-level storage buckets provide the same benefits for Self-managed instances, especially when different business units and departments share an instance to use infrastructure and administrative resources efficiently. The same applies to firms that have separate project teams managing AI workflows for separate customer engagements.
Availability matrix
The following table shows the availability of BYOB across different W&B Server deployment types. An X means the feature is available on the specific deployment type.
| W&B Server deployment type | Instance level | Team level | Additional information |
|---|---|---|---|
| Dedicated cloud | X | X | Both instance and team level BYOB are available for Amazon Web Services, Google Cloud Platform, and Microsoft Azure. For team-level BYOB, you can connect to a cloud-native storage bucket in the same or another cloud, or even an S3-compatible secure storage like MinIO hosted in your cloud or on-prem infrastructure. |
| SaaS Cloud | Not applicable | X | Team-level BYOB is available only for Amazon Web Services and Google Cloud Platform. W&B fully manages the default and only storage bucket for Microsoft Azure. |
| Self-managed | X | X | Instance-level BYOB is the default since the instance is fully managed by you. If your self-managed instance is in the cloud, you can connect to a cloud-native storage bucket in the same or another cloud for team-level BYOB. You can also use S3-compatible secure storage like MinIO for either instance or team-level BYOB. |
Once you configure an instance or team level storage bucket for your Dedicated cloud or Self-managed instance, or a team level storage bucket for your SaaS Cloud account, you cannot change or reconfigure the storage bucket for any of those scopes. That includes the inability to migrate data to another bucket and remap relevant references in the main product storage. W&B recommends planning your storage bucket layout carefully before configuring either the instance or team level scope. Reach out to your W&B team for any questions.
Cross-cloud or S3-compatible storage for team-level BYOB
You can connect to a cloud-native storage bucket in another cloud or to an S3-compatible storage bucket like MinIO for team-level BYOB in your Dedicated cloud or Self-managed instance.
To enable the use of cross-cloud or S3-compatible storage, specify the storage bucket including the relevant access key in one of the following formats, using the GORILLA_SUPPORTED_FILE_STORES environment variable for your W&B instance.
Configure an S3-compatible storage for team-level BYOB in Dedicated cloud or Self-managed instance
The region parameter is mandatory, except for when your W&B instance is in AWS and the AWS_REGION configured on the W&B instance nodes matches the region configured for the S3-compatible storage.
Configure a cross-cloud native storage for team-level BYOB in Dedicated cloud or Self-managed instance
Specify the path in a format specific to the locations of your W&B instance and storage bucket:
From W&B instance in GCP or Azure to a bucket in AWS:
Connectivity to S3-compatible storage for team-level BYOB is not available in SaaS Cloud. Also, connectivity to an AWS bucket for team-level BYOB is cross-cloud in SaaS Cloud, as that instance resides in GCP. That cross-cloud connectivity doesn’t use the access key and environment variable based mechanism as outlined previously for Dedicated cloud and Self-managed instances.
Based on your use case, configure a storage bucket at the team or instance level. How a storage bucket is provisioned or configured is the same irrespective of the level it’s configured at, except for the access mechanism in Azure.
W&B recommends that you use a Terraform module managed by W&B to provision a storage bucket along with the necessary access mechanism and related IAM permissions:
W&B requires you to provision a KMS Key to encrypt and decrypt the data on the S3 bucket. The key usage type must be ENCRYPT_DECRYPT. Assign the following policy to the key:
This policy grants your AWS account full access to the key and also assigns the required permissions to the AWS account hosting the W&B Platform. Keep a record of the KMS Key ARN.
Provision the S3 Bucket
Follow these steps to provision the S3 bucket in your AWS account:
Create the S3 bucket with a name of your choice. Optionally create a folder which you can configure as sub-path to store all W&B files.
Enable bucket versioning.
Enable server side encryption, using the KMS key from the previous step.
Grant the required S3 permissions to the AWS account hosting the W&B Platform, which requires these permissions to generate pre-signed URLs that AI workloads in your cloud infrastructure or user browsers utilize to access the bucket.
Replace <wandb_bucket> accordingly and keep a record of the bucket name. If you are using Dedicated cloud, share the bucket name with your W&B team in case of instance level BYOB. In case of team level BYOB on any deployment type, configure the bucket while creating the team.
If you are using SaaS Cloud or Dedicated cloud, replace <aws_principal_and_role_arn> with the corresponding value.
For SaaS Cloud: arn:aws:iam::725579432336:role/WandbIntegration
Replace <bucket_name> with the correct bucket name and run gsutil.
gsutil cors set cors-policy.json gs://<bucket_name>
Verify the bucket’s policy. Replace <bucket_name> with the correct bucket name.
gsutil cors get gs://<bucket_name>
If you are using SaaS Cloud or Dedicated cloud, grant the Storage Admin role to the GCP service account linked to the W&B Platform:
For SaaS Cloud, the account is: wandb-integration@wandb-production.iam.gserviceaccount.com
For Dedicated cloud the account is: deploy@wandb-production.iam.gserviceaccount.com
Keep a record of the bucket name. If you are using Dedicated cloud, share the bucket name with your W&B team in case of instance level BYOB. In case of team level BYOB on any deployment type, configure the bucket while creating the team.
Provision the Azure Blob Storage
For the instance level BYOB, if you're not using this Terraform module, follow the steps below to provision an Azure Blob Storage bucket in your Azure subscription:
Create a bucket with a name of your choice. Optionally create a folder which you can configure as sub-path to store all W&B files.
Enable blob and container soft deletion.
Enable versioning.
Configure the CORS policy on the bucket
To set the CORS policy through the UI go to the blob storage, scroll down to Settings/Resource Sharing (CORS) and then set the following:
| Parameter | Value |
|---|---|
| Allowed Origins | * |
| Allowed Methods | GET, HEAD, PUT |
| Allowed Headers | * |
| Exposed Headers | * |
| Max Age | 3600 |
Generate a storage account access key, and keep a record of that along with the storage account name. If you are using Dedicated cloud, share the storage account name and access key with your W&B team using a secure sharing mechanism.
For the team level BYOB, W&B recommends that you use Terraform to provision the Azure Blob Storage bucket along with the necessary access mechanism and permissions. If you use Dedicated cloud, provide the OIDC issuer URL for your instance. Make a note of details that you need to configure the bucket while creating the team:
Storage account name
Storage container name
Managed identity client id
Azure tenant id
Configure BYOB in W&B
If you’re connecting to a cloud-native storage bucket in another cloud or to an S3-compatible storage bucket like MinIO for team-level BYOB in your Dedicated cloud or Self-managed instance, refer to Cross-cloud or S3-compatible storage for team-level BYOB. In such cases, you must specify the storage bucket using the GORILLA_SUPPORTED_FILE_STORES environment variable for your W&B instance, before you configure it for a team using the instructions below.
To configure a storage bucket at the team level when you create a W&B Team:
Provide a name for your team in the Team Name field.
Select External storage for the Storage type option.
Choose either New bucket from the dropdown or select an existing bucket.
Multiple W&B Teams can use the same cloud storage bucket. To enable this, select an existing cloud storage bucket from the dropdown.
From the Cloud provider dropdown, select your cloud provider.
Provide the name of your storage bucket for the Name field. If you have a Dedicated cloud or Self-managed instance on Azure, provide the values for Account name and Container name fields.
(Optional) Provide the bucket sub-path in the optional Path field. Do this if you would not like W&B to store any files in a folder at the root of the bucket.
(Optional if using AWS bucket) Provide the ARN of your KMS encryption key for the KMS key ARN field.
(Optional if using Azure bucket) Provide the values for the Tenant ID and the Managed Identity Client ID fields.
(Optional on SaaS Cloud) Optionally invite team members when creating the team.
Press the Create Team button.
An error or warning appears at the bottom of the page if there are issues accessing the bucket or the bucket has invalid settings.
Reach out to W&B Support at support@wandb.com to configure instance level BYOB for your Dedicated cloud or Self-managed instance.
When needed, AI workloads or user browser clients within your network request pre-signed URLs from the W&B Platform. The W&B Platform then accesses the relevant blob storage to generate a pre-signed URL with the required permissions and returns it to the client. The client then uses the pre-signed URL to access the blob storage for object upload or retrieval operations. URL expiry time is 1 hour for object downloads and 24 hours for object uploads, since some large objects need more time to upload in chunks.
Team-level access control
Each pre-signed URL is restricted to specific buckets based on team level access control in the W&B platform. If a user is part of a team which is mapped to a blob storage bucket using secure storage connector, and if that user is part of only that team, then the pre-signed URLs generated for their requests would not have permissions to access blob storage buckets mapped to other teams.
W&B recommends adding users to only the teams that they are supposed to be a part of.
Network restriction
W&B recommends restricting the networks that can use pre-signed URLs to access the blob storage, by using IAM policy based restrictions on the buckets.
In the case of AWS, you can use VPC or IP address based network restrictions. These ensure that your W&B-specific buckets are accessed only from networks where your AI workloads are running, or from gateway IP addresses that map to your user machines if your users access artifacts using the W&B UI.
Audit logs
W&B also recommends using W&B audit logs in addition to blob storage specific audit logs. For the latter, refer to AWS S3 access logs, Google Cloud Storage audit logs, and Monitor Azure Blob Storage. Admin and security teams can use audit logs to keep track of which user is doing what in the W&B product and take necessary action if they determine that some operations need to be limited for certain users.
Pre-signed URLs are the only supported blob storage access mechanism in W&B. W&B recommends configuring some or all of the above list of security controls depending on your risk appetite.
5.3.3 - Configure IP allowlisting for Dedicated Cloud
You can restrict access to your Dedicated Cloud instance from only an authorized list of IP addresses. This applies to the access from your AI workloads to the W&B APIs and from your user browsers to the W&B app UI as well. Once IP allowlisting has been set up for your Dedicated Cloud instance, W&B denies any requests from other unauthorized locations. Reach out to your W&B team to configure IP allowlisting for your Dedicated Cloud instance.
IP allowlisting is available on Dedicated Cloud instances on AWS, GCP and Azure.
You can use IP allowlisting with secure private connectivity. If you use IP allowlisting with secure private connectivity, W&B recommends using secure private connectivity for all traffic from your AI workloads and majority of the traffic from your user browsers if possible, while using IP allowlisting for instance administration from privileged locations.
W&B strongly recommends using CIDR blocks assigned to your corporate or business egress gateways rather than individual /32 IP addresses. Using individual IP addresses is not scalable and is subject to strict per-cloud limits.
5.3.4 - Configure private connectivity to Dedicated Cloud
You can connect to your Dedicated Cloud instance over the cloud provider’s secure private network. This applies to the access from your AI workloads to the W&B APIs and optionally from your user browsers to the W&B app UI as well. When using private connectivity, the relevant requests and responses do not transit through the public network or internet.
Secure private connectivity is coming soon as an advanced security option with Dedicated Cloud.
Secure private connectivity is available on Dedicated Cloud instances on AWS, GCP and Azure:
Once enabled, W&B creates a private endpoint service for your instance and provides you the relevant DNS URI to connect to. With that, you can create private endpoints in your cloud accounts that can route the relevant traffic to the private endpoint service. Private endpoints are easier to setup for your AI training workloads running within your cloud VPC or VNet. To use the same mechanism for traffic from your user browsers to the W&B app UI, you must configure appropriate DNS based routing from your corporate network to the private endpoints in your cloud accounts.
If you would like to use this feature, contact your W&B team.
You can use secure private connectivity together with IP allowlisting. In that case, W&B recommends using secure private connectivity for all traffic from your AI workloads and for the majority of the traffic from your user browsers if possible, while using IP allowlisting for instance administration from privileged locations.
5.3.5 - Data encryption in Dedicated cloud
W&B uses a W&B-managed cloud-native key to encrypt the W&B-managed database and object storage in every Dedicated cloud, by using the customer-managed encryption key (CMEK) capability in each cloud. In this case, W&B acts as a customer of the cloud provider, while providing the W&B platform as a service to you. Using a W&B-managed key means that W&B has control over the keys that it uses to encrypt the data in each cloud, thus doubling down on its promise to provide a highly safe and secure platform to all of its customers.
W&B uses a unique key to encrypt the data in each customer instance, providing another layer of isolation between Dedicated cloud tenants. The capability is available on AWS, Azure and GCP.
Dedicated cloud instances on GCP and Azure that W&B provisioned before August 2024 use the default cloud provider managed key for encrypting the W&B-managed database and object storage. Only new instances that W&B has been creating starting August 2024 use the W&B-managed cloud-native key for the relevant encryption.
Dedicated cloud instances on AWS have been using the W&B-managed cloud-native key for encryption from before August 2024.
W&B doesn’t generally allow customers to bring their own cloud-native key to encrypt the W&B-managed database and object storage in their Dedicated cloud instance, because multiple teams and personas in an organization could have access to its cloud infrastructure for various reasons. Some of those teams or personas may not have context on W&B as a critical component in the organization’s technology stack, and thus may remove the cloud-native key completely or revoke W&B’s access to it. Such an action could corrupt all data in the organization’s W&B instance and leave it in an irrecoverable state.
If your organization requires the use of its own cloud-native key to encrypt the W&B-managed database and object storage in order to approve the use of Dedicated cloud for its AI workflows, W&B can review that request on an exception basis. If approved, use of your cloud-native key for encryption would conform to the shared responsibility model of W&B Dedicated cloud. If any user in your organization removes your key or revokes W&B’s access to it at any point while your Dedicated cloud instance is live, W&B would not be liable for any resulting data loss or corruption and would not be responsible for recovering that data.
5.4 - Configure privacy settings
Organization and Team admins can configure a set of privacy settings at the organization and team scopes respectively. When configured at the organization scope, organization admins enforce those settings for all teams in that organization.
W&B recommends that organization admins enforce a privacy setting only after communicating it in advance to all team admins and users in their organization, to avoid unexpected changes in their workflows.
Configure privacy settings for a team
Team admins can configure privacy settings for their respective teams from within the Privacy section of the team Settings tab. Each setting is configurable as long as it’s not enforced at the organization scope:
Hide this team from all non-members
Make all future team projects private (public sharing not allowed)
Allow any team member to invite other members (not just admins)
Turn off public sharing to outside of team for reports in private projects. This turns off existing magic links.
Allow users with matching organization email domain to join this team.
Organization admins can enforce privacy settings for all teams in their organization from within the Privacy section of the Settings tab in the account or organization dashboard. If organization admins enforce a setting, team admins are not allowed to configure that within their respective teams.
Enforce team visibility restrictions: Enable this option to hide all teams from non-members.
Enforce privacy for future projects: Enable this option to require all future projects in all teams to be private or restricted.
Enforce invitation control: Enable this option to prevent non-admins from inviting members to any team.
Enforce report sharing control: Enable this option to turn off public sharing of reports in private projects and deactivate existing magic links.
Enforce team self-joining restrictions: Enable this option to restrict users with a matching organization email domain from self-joining any team.
Enable this option to turn off code saving by default for all teams.
5.5 - Monitoring and usage
5.5.1 - Track user activity with audit logs
Use W&B audit logs to track user activity within your organization and to conform to your enterprise governance requirements. Audit logs are available in JSON format. How to access audit logs depends on your W&B platform deployment type:
Once you have access to your audit logs, analyze them using your preferred tools, such as Pandas, Amazon Redshift, Google BigQuery, or Microsoft Fabric. You may need to transform the JSON-formatted audit logs into a format relevant to the tool before analysis. Information on how to transform your audit logs for specific tools is outside the scope of W&B documentation.
Audit Log Retention: If a compliance, security, or risk team in your organization requires audit logs to be retained for a specific period of time, W&B recommends periodically transferring the logs from your instance-level bucket to long-term retention storage. If you instead use the API to access the audit logs, you can implement a simple script that runs periodically (for example, daily or every few days) to fetch any logs generated since the last run, and store them in short-term storage for analysis or transfer them directly to long-term retention storage.
HIPAA compliance requires that you retain audit logs for a minimum of 6 years. For HIPAA-compliant Dedicated Cloud instances with BYOB, you must configure guardrails for your managed storage including any long-term retention storage, to ensure that no internal or external user can delete audit logs before the end of the mandatory retention period.
Audit log schema
The following table lists all the different keys that might be present in your audit logs. Each log contains only the assets relevant to the corresponding action, and others are omitted from the log.
| Key | Definition |
|---|---|
| timestamp | Timestamp in RFC3339 format. For example, 2023-01-23T12:34:56Z represents 12:34:56 UTC on Jan 23, 2023. |
|  | If present, ID of the logged-in user who performed the action. |
| response_code | HTTP response code for the action. |
| artifact_asset | If present, action was taken on this artifact ID. |
| artifact_sequence_asset | If present, action was taken on this artifact sequence ID. |
| entity_asset | If present, action was taken on this entity or team ID. |
| project_asset | If present, action was taken on this project ID. |
| report_asset | If present, action was taken on this report ID. |
| user_asset | If present, action was taken on this user asset. |
| cli_version | If the action was taken using the Python SDK, the SDK version. |
| actor_ip | IP address of the logged-in user. |
| actor_email | If present, action was taken on this actor email. |
| artifact_digest | If present, action was taken on this artifact digest. |
| artifact_qualified_name | If present, action was taken on this artifact. |
| entity_name | If present, action was taken on this entity or team name. |
| project_name | If present, action was taken on this project name. |
| report_name | If present, action was taken on this report name. |
| user_email | If present, action was taken on this user email. |
Personally identifiable information (PII), such as email ids and the names of projects, teams, and reports, is available only using the API endpoint option, and can be turned off as described below.
Fetch audit logs using API
An instance admin can fetch the audit logs for your W&B instance using the following API:
Construct the full API endpoint using a combination of the base endpoint <wandb-platform-url>/admin/audit_logs and the following URL parameters:
numDays: logs are fetched starting from today minus numDays up to the most recent; defaults to 0, which returns logs only for today.
anonymize: if set to true, removes any PII; defaults to false.
Execute an HTTP GET request on the constructed full API endpoint, either by running it directly in a modern browser or by using a tool such as Postman, HTTPie, or cURL.
If your W&B instance URL is https://mycompany.wandb.io and you want audit logs without PII for user activity within the last week, use the API endpoint https://mycompany.wandb.io/admin/audit_logs?numDays=7&anonymize=true.
Only W&B instance admins can fetch audit logs using the API. If you are not an instance admin or are not logged in to your organization, you get an HTTP 403 Forbidden error.
The API response contains new-line separated JSON objects. Objects include the fields described in the schema. This is the same format used when syncing audit log files to an instance-level bucket (where applicable, as mentioned earlier). In those cases, the audit logs are located in the /wandb-audit-logs directory in your bucket.
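For example, a small script along the following lines can fetch and parse the logs. The instance URL is a placeholder, and the authentication shown (basic auth with an instance admin's username and API key) is an assumption; use whatever credentials are appropriate for your deployment:

```python
import json
import requests

BASE_URL = "https://mycompany.wandb.io"   # placeholder instance URL
AUTH = ("<admin-username>", "<api-key>")  # assumed instance admin credentials

params = {"numDays": 7, "anonymize": "true"}
response = requests.get(f"{BASE_URL}/admin/audit_logs", params=params, auth=AUTH)
response.raise_for_status()

# The response body is new-line separated JSON, one object per audit event
events = [json.loads(line) for line in response.text.splitlines() if line.strip()]
for event in events:
    print(event.get("timestamp"), event.get("response_code"))
```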
Actions
The following table describes possible actions that can be recorded by W&B:
Provide a name for your app in the App Name field.
Select a Slack workspace where you want to develop your app in. Ensure that the Slack workspace you use is the same workspace you intend to use for alerts.
Configure the Slack application
On the left sidebar, select OAuth & Permissions.
Within the Scopes section, provide the bot with the incoming_webhook scope. Scopes give your app permission to perform actions in your development workspace.
For more information about OAuth scopes for Bots, see the Understanding OAuth scopes for Bots tutorial in the Slack API documentation.
Configure the Redirect URL to point to your W&B installation. Use the same URL that your host URL is set to in your local system settings. You can specify multiple URLs if you have different DNS mappings to your instance.
Select Save URLs.
Optionally, under Restrict API Token Usage, allow-list the IP addresses or IP ranges of your W&B instances. Limiting the allowed IP addresses helps further secure your Slack application.
Register your Slack application with W&B
Navigate to the System Settings or System Console page of your W&B instance, depending on your deployment.
Depending on which page you are on, follow one of the options below:
If you are in the System Console: go to Settings then to Notifications
If you are in the System Settings: toggle the Enable a custom Slack application to dispatch alerts to enable a custom Slack application
Supply your Slack client ID and Slack secret then click Save. Navigate to Basic Information in Settings to find your application’s client ID and secret.
Verify that everything is working by setting up a Slack integration in the W&B app.
Use the organization dashboard to get a holistic view of users that belong to your organization, how users of your organization use W&B, along with properties such as:
Name: The name of the user and their W&B username.
Last active: The time the user last used W&B. This includes any activity that requires authentication, including viewing pages in the product, logging runs or taking any other action, or logging in.
Role: The role of the user.
Email: The email of the user.
Team: The names of teams the user belongs to.
View the status of a user
The Last Active column shows whether a user has a pending invitation or is an active user. A user is in one of three states:
Invite pending: Admin has sent invite but user has not accepted invitation.
Active: User has accepted the invite and created an account.
Deactivated: Admin has revoked access of the user.
View and share how your organization uses W&B
View how your organization uses W&B in CSV format.
Select the three dots next to the Add user button.
From the dropdown, select Export as CSV.
This exports a CSV file that lists all users of an organization along with details about the user, such as their user name, time stamp of when they were last active, roles, email, and more.
View user activity
Use the Last Active column to get an Activity summary of an individual user.
Hover your mouse over the Last Active entry for a user.
A tooltip appears and provides a summary of information about the user’s activity.
A user is active if they:
log in to W&B.
view any page in the W&B App.
log runs.
use the SDK to track an experiment.
interact with the W&B Server in any way.
View active users over time
Use the Users active over time plot in the Organization dashboard to get an aggregate overview of how many users are active over time (right most plot in image below).
You can use the dropdown menu to filter results based on days, months, or all time.
5.6 - Configure SMTP
In W&B Server, adding users to the instance or a team triggers an email invite. To send these email invites, W&B uses a third-party mail server. In some cases, organizations have strict policies on traffic leaving the corporate network, which can prevent these email invites from ever reaching the end user. W&B Server offers an option to send these invite emails through an internal SMTP server instead.
To configure, follow the steps below:
Set the GORILLA_EMAIL_SINK environment variable in the docker container or the kubernetes deployment to smtp://<user:password>@smtp.host.com:<port>
username and password are optional
If you're using an SMTP server that doesn't require authentication, set the environment variable value like GORILLA_EMAIL_SINK=smtp://smtp.host.com:<port>
Commonly used port numbers for SMTP are ports 587, 465 and 25. Note that this might differ based on the type of the mail server you’re using.
The default sender email address for SMTP is initially set to noreply@wandb.com. To update it to an email address of your choice, set the GORILLA_EMAIL_FROM_ADDRESS environment variable on the server to your desired sender email address.
5.7 - Configure environment variables
How to configure the W&B Server installation
In addition to configuring instance level settings via the System Settings admin UI, W&B also provides a way to configure these values via code using Environment Variables. Also, refer to advanced configuration for IAM.
Environment variable reference
| Environment Variable | Description |
|---|---|
| LICENSE | Your wandb/local license |
| MYSQL | The MySQL connection string |
| BUCKET | The S3 / GCS bucket for storing data |
| BUCKET_QUEUE | The SQS / Google PubSub queue for object creation events |
| NOTIFICATIONS_QUEUE | The SQS queue on which to publish run events |
| AWS_REGION | The AWS region where your bucket lives |
| HOST | The FQDN of your instance, for example https://my.domain.net |
| OIDC_ISSUER | A URL to your OpenID Connect identity provider, for example https://cognito-idp.us-east-1.amazonaws.com/us-east-1_uiIFNdacd |
| OIDC_CLIENT_ID | The client ID of the application in your identity provider |
| OIDC_AUTH_METHOD | Implicit (default) or pkce; see below for more context |
| SLACK_CLIENT_ID | The client ID of the Slack application you want to use for alerts |
| SLACK_SECRET | The secret of the Slack application you want to use for alerts |
| LOCAL_RESTORE | You can temporarily set this to true if you're unable to access your instance. Check the logs from the container for temporary credentials. |
| REDIS | Can be used to set up an external Redis instance with W&B. |
| LOGGING_ENABLED | When set to true, access logs are streamed to stdout. You can also mount a sidecar container and tail /var/log/gorilla.log without setting this variable. |
| GORILLA_ALLOW_USER_TEAM_CREATION | When set to true, allows non-admin users to create a new team. False by default. |
| GORILLA_DATA_RETENTION_PERIOD | How long to retain deleted run data, in hours. Deleted run data is unrecoverable. Append an h to the input value, for example "24h". |
| ENABLE_REGISTRY_UI | When set to true, enables the new W&B Registry UI. |
Use the GORILLA_DATA_RETENTION_PERIOD environment variable cautiously. Data is removed immediately once the environment variable is set. We also recommend that you backup both the database and the storage bucket before you enable this flag.
Advanced Reliability Settings
Redis
Configuring an external Redis server is optional but recommended for production systems. Redis helps improve the reliability of the service and enables caching to decrease load times, especially in large projects. Use a managed Redis service such as ElastiCache with high availability (HA) and the following specifications:
Minimum 4GB of memory, suggested 8GB
Redis version 6.x
In transit encryption
Authentication enabled
To configure the Redis instance with W&B, navigate to the W&B settings page at http(s)://YOUR-W&B-SERVER-HOST/system-admin. Enable the “Use an external Redis instance” option, and fill in the Redis connection string in the following format: redis://$HOST:$PORT
You can also configure Redis using the environment variable REDIS on the container or in your Kubernetes deployment. Alternatively, you can set up REDIS as a Kubernetes secret.
This page assumes the Redis instance is running on the default port of 6379. If you configure a different port, set up authentication, and also want TLS enabled on the Redis instance, the connection string format looks something like: redis://$USER:$PASSWORD@$HOST:$PORT?tls=true
5.8 - Release process for W&B Server
Release process for W&B Server
Frequency and deployment types
W&B Server releases apply to the Dedicated Cloud and Self-managed deployments. There are three kinds of server releases:
| Release type | Description |
|---|---|
| Monthly | Monthly releases include new features, enhancements, and medium- and low-severity bug fixes. |
| Patch | Patch releases include critical and high-severity bug fixes. Patches are released only rarely, as needed. |
| Feature | A feature release targets a specific release date for a new product feature, which occasionally happens before the standard monthly release. |
All releases are immediately deployed to all Dedicated Cloud instances once the acceptance testing phase is complete. It keeps those managed instances fully updated, making the latest features and fixes available to relevant customers. Customers with Self-managed instances are responsible for the update process on their own schedule, where they can use the latest Docker image. Refer to release support and end of life.
Some advanced features are available only with the enterprise license. So even if you get the latest Docker image but don’t have an enterprise license, you would not be able to take advantage of the relevant advanced capabilities.
Some new features start in private preview, which means they are only available to design partners or early adopters. The W&B team must enable a private preview feature before you can use it in your instance.
Release notes
The release notes for all releases are available at W&B Server Releases on GitHub. Customers who use Slack can receive automatic release announcements in their W&B Slack channel. Ask your W&B team to enable these updates.
Release update and downtime
A server release does not generally require instance downtime for Dedicated Cloud instances and for customers with Self-managed deployments who have implemented a proper rolling update process.
Downtime might occur for the following scenarios:
A new feature or enhancement requires changes to the underlying infrastructure such as compute, storage or network. W&B tries to send relevant advance notifications to Dedicated Cloud customers.
An infrastructure change due to a security patch or to avoid support end-of-life for a particular version. For urgent changes, Dedicated Cloud customers may not receive advance notifications. The priority here is to keep the fleet secure and fully supported.
For both cases, updates roll out to all Dedicated Cloud instances without exception. Customers with Self-managed instances are responsible to manage such updates on their own schedule. Refer to release support and end of life.
Release support and end of life policy
W&B supports every server release for six months from the release date. Dedicated Cloud instances are automatically updated. Customers with Self-managed instances are responsible for updating their deployments in time to comply with the support policy. Avoid staying on a version older than six months, as it significantly limits support from W&B.
W&B strongly recommends that customers with Self-managed instances update their deployments with the latest release at least every quarter. This ensures that you are using the latest capabilities while keeping well ahead of the release's end of life.
6 - Integrations
W&B integrations make it fast and easy to set up experiment tracking and data versioning inside existing projects. Check out integrations for ML frameworks such as PyTorch, ML libraries such as Hugging Face, or cloud services such as Amazon SageMaker.
Related resources
Examples: Try the code with notebook and script examples for each integration.
Video Tutorials: Learn to use W&B with YouTube video tutorials
6.1 - Add wandb to any library
Add wandb to any library
This guide provides best practices on how to integrate W&B into your Python library to get powerful Experiment Tracking, GPU and System Monitoring, Model Checkpointing, and more for your own library.
If you are still learning how to use W&B, we recommend exploring the other W&B Guides in these docs, such as Experiment Tracking, before reading further.
Below we cover tips and best practices for when the codebase you are working on is more complicated than a single Python training script or Jupyter notebook. The topics covered are:
Setup requirements
User Login
Starting a wandb Run
Defining a Run Config
Logging to W&B
Distributed Training
Model Checkpointing and More
Hyper-parameter tuning
Advanced Integrations
Setup requirements
Before you get started, decide whether or not to require W&B in your library’s dependencies:
Require W&B On Installation
Add the W&B Python library (wandb) to your dependencies file, for example, in your requirements.txt file:
torch==1.8.0
...
wandb==0.13.*
Make W&B optional On Installation
There are two ways to make the W&B SDK (wandb) optional:
A. Raise an error when a user tries to use wandb functionality without installing it manually and show an appropriate error message:
try:
    import wandb
except ImportError:
    raise ImportError(
        "You are trying to use wandb which is not currently installed. "
        "Please install it using pip install wandb"
    )
B. Add wandb as an optional dependency to your pyproject.toml file, if you are building a Python package:
[project]
name = "my_awesome_lib"
version = "0.1.0"
dependencies = [
    "torch",
    "sklearn",
]

[project.optional-dependencies]
dev = [
    "wandb",
]
User Login
There are a few ways for your users to log in to W&B:
Log into W&B with a bash command in a terminal:
wandb login $MY_WANDB_KEY
If they’re in a Jupyter or Colab notebook, log into W&B like so:
If a user is using wandb for the first time without following any of the steps mentioned above, they will automatically be prompted to log in when your script calls wandb.init.
Starting A wandb Run
A W&B Run is a unit of computation logged by W&B. Typically, you associate a single W&B Run per training experiment.
Initialize W&B and start a Run within your code with:
wandb.init()
Optionally, you can provide a name for the project, or let the user set it themselves with parameters such as wandb_project in your code, along with the username or team name, such as wandb_entity, for the entity parameter:
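For example, a hypothetical trainer in your library might accept wandb_project and wandb_entity arguments and forward them to wandb.init (the parameter names here are illustrative, not part of the W&B API):

```python
import wandb

def train(wandb_project="my_library_project", wandb_entity=None):
    # wandb_project and wandb_entity are hypothetical parameters exposed by
    # your library; they are passed through as project and entity.
    run = wandb.init(project=wandb_project, entity=wandb_entity)
    # ... your training loop goes here ...
    run.finish()
```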
Your library should create the W&B Run as early as possible, because any output in your console, including error messages, is logged as part of the W&B Run. This makes debugging easier.
Run The Library With wandb As Optional
If you want to make wandb optional when your users use your library, you can either:
Define a wandb flag such as:
trainer = my_trainer(..., use_wandb=True)
python train.py ... --use-wandb
Or, set wandb to be disabled in wandb.init:
wandb.init(mode="disabled")
export WANDB_MODE=disabled
or
wandb disabled
Or, set wandb to be offline - note this will still run wandb, it just won’t try and communicate back to W&B over the internet:
export WANDB_MODE=offline
or
os.environ['WANDB_MODE'] ='offline'
wandb offline
Defining A wandb Run Config
With a wandb run config, you can provide metadata about your model, dataset, and so on when you create a W&B Run. You can use this information to compare different experiments and quickly understand the main differences.
Typical config parameters you can log include:
Model name, version, architecture parameters, etc.
Dataset name, version, number of train/val examples, etc.
Training parameters such as learning rate, batch size, optimizer, etc.
The following code snippet shows how to log a config:
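(The exact contents depend on your library; this is an illustrative sketch with placeholder hyperparameter names and values.)

```python
import wandb

# Placeholder metadata describing the model, dataset, and training setup
config = {
    "model_name": "resnet50",
    "dataset": "my-dataset-v1",
    "learning_rate": 0.001,
    "batch_size": 32,
}

run = wandb.init(project="my_library_project", config=config)
```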
Use wandb.config.update to update the config. Updating your configuration dictionary is useful when parameters are obtained after the dictionary was defined. For example, you might want to add a model’s parameters after the model is instantiated.
Create a dictionary where the key value is the name of the metric. Pass this dictionary object to wandb.log:
for epoch in range(NUM_EPOCHS):
for input, ground_truth in data:
prediction = model(input)
loss = loss_fn(prediction, ground_truth)
metrics = { "loss": loss }
wandb.log(metrics)
If you have a lot of metrics, you can have them automatically grouped in the UI by using prefixes in the metric name, such as train/... and val/.... This will create separate sections in your W&B Workspace for your training and validation metrics, or other metric types you’d like to separate:
Sometimes you might need to perform multiple calls to wandb.log for the same training step. The wandb SDK has its own internal step counter that is incremented every time a wandb.log call is made. This means that there is a possibility that the wandb log counter is not aligned with the training step in your training loop.
To avoid this, we recommend that you specifically define your x-axis step. You can define the x-axis with wandb.define_metric and you only need to do this once, after wandb.init is called:
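A sketch of that call, using a custom global_step metric as the shared x-axis, might look like this:

```python
import wandb

wandb.init(project="my_library_project")

# Log a custom x-axis metric, then plot every other metric against it
wandb.define_metric("global_step")
wandb.define_metric("*", step_metric="global_step")
```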
The glob pattern, *, means that every metric will use global_step as the x-axis in your charts. If you only want certain metrics to be logged against global_step, you can specify them instead:
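For example, to pin only the training and validation losses to global_step while leaving other metrics on the default internal step:

```python
import wandb

wandb.init(project="my_library_project")

# Only these metrics use global_step as their x-axis
wandb.define_metric("train/loss", step_metric="global_step")
wandb.define_metric("val/loss", step_metric="global_step")
```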
Now that you’ve called wandb.define_metric, you just need to log your metrics as well as your step_metric, global_step, every time you call wandb.log:
for step, (input, ground_truth) in enumerate(data):
... wandb.log({"global_step": step, "train/loss": 0.1})
wandb.log({"global_step": step, "eval/loss": 0.2})
If you do not have access to the independent step variable, for example “global_step” is not available during your validation loop, the previously logged value for “global_step” is automatically used by wandb. In this case, ensure you log an initial value for the metric so it has been defined when it’s needed.
Log Images, Tables, Text, Audio and More
In addition to metrics, you can log plots, histograms, tables, text, and media such as images, videos, audios, 3D, and more.
Some considerations when logging data include:
How often should the metric be logged? Should it be optional?
What type of data could be helpful in visualizing?
For images, you can log sample predictions, segmentation masks, etc., to see the evolution over time.
For text, you can log tables of sample predictions for later exploration.
Refer to Log Data with wandb.log for a full guide on logging media, objects, plots, and more.
Distributed Training
For frameworks supporting distributed environments, you can adapt any of the following workflows:
Detect which is the “main” process and only use wandb there. Any required data coming from other processes must be routed to the main process first. (This workflow is encouraged).
Call wandb in every process and auto-group them by giving them all the same unique group name (see the sketch below).
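A hedged sketch of the second workflow, where each process joins the same group (the group name and project are placeholders):

```python
import os
import wandb

# Each process starts its own run; giving every run the same group name lets
# the W&B UI aggregate them into a single distributed experiment.
group_name = os.environ.get("EXPERIMENT_GROUP", "experiment-1")
run = wandb.init(project="my_library_project", group=group_name)
```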
If your framework uses or produces models or datasets, you can log them for full traceability and have wandb automatically monitor your entire pipeline through W&B Artifacts.
When using Artifacts, it might be useful but not necessary to let your users define:
The ability to log model checkpoints or datasets (in case you want to make it optional).
The path/reference of the artifact being used as input, if any. For example, user/project/artifact.
The frequency for logging Artifacts.
Log Model Checkpoints
You can log Model Checkpoints to W&B. It is useful to leverage the unique wandb Run ID to name output Model Checkpoints so you can differentiate them across Runs. You can also add useful metadata. In addition, you can add aliases to each model, as shown below:
metadata = {"eval/accuracy": 0.8, "train/steps": 800}
artifact = wandb.Artifact(
name=f"model-{wandb.run.id}",
metadata=metadata,
type="model" )
artifact.add_dir("output_model")  # local directory where the model weights are stored
aliases = ["best", "epoch_10"]
wandb.log_artifact(artifact, aliases=aliases)
You can log output Artifacts at any frequency (for example, every epoch, every 500 steps, and so on) and they are automatically versioned.
Log And Track Pre-trained Models Or Datasets
You can log artifacts that are used as inputs to your training such as pre-trained models or datasets. The following snippet demonstrates how to log an Artifact and add it as an input to the ongoing Run as shown in the graph above.
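A minimal sketch of that pattern (the artifact name and file path are placeholders):

```python
import wandb

run = wandb.init(project="my_library_project")

# Create an artifact for the input data, add files to it, then mark it as an
# input of the ongoing run (this also logs the artifact if it is new)
input_data = wandb.Artifact(name="my-dataset", type="dataset")
input_data.add_file("dataset.npy")  # or add_dir(...) for a folder
run.use_artifact(input_data)
```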
Artifacts can be found in the Artifacts section of W&B and can be referenced with aliases generated automatically (latest, v2, v3) or manually when logging (best_accuracy, etc.).
To download an Artifact without creating a wandb run (through wandb.init), for example in distributed environments or for simple inference, you can instead reference the artifact with the wandb API:
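A minimal sketch, assuming a placeholder artifact path:

```python
import wandb

# Use the public API directly; no run is created via wandb.init
api = wandb.Api()
artifact = api.artifact("user/project/artifact:latest")
local_path = artifact.download()
print("Artifact files downloaded to:", local_path)
```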
6.2 - Azure OpenAI Fine-Tuning
Fine-tuning GPT-3.5 or GPT-4 models on Microsoft Azure using W&B tracks, analyzes, and improves model performance by automatically capturing metrics and facilitating systematic evaluation through W&B’s experiment tracking and evaluation tools.
6.3 - Catalyst
How to integrate W&B for Catalyst, a PyTorch framework.
Catalyst is a PyTorch framework for deep learning R&D that focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new.
Catalyst includes a W&B integration for logging parameters, metrics, images, and other artifacts.
Run an example colab to see Catalyst and W&B integration in action.
6.4 - Cohere fine-tuning
How to Fine-Tune Cohere models using W&B.
With Weights & Biases you can log your Cohere model’s fine-tuning metrics and configuration to analyze and understand the performance of your models and share the results with your colleagues.
To add Cohere fine-tuning logging to your W&B workspace:
Create a WandbConfig with your W&B API key, W&B entity and project name. You can find your W&B API key at https://wandb.ai/authorize
Pass this config to the FinetunedModel object along with your model name, dataset and hyperparameters to kick off your fine-tuning run.
from cohere.finetuning import FinetunedModel, Settings, WandbConfig

# create a config with your W&B details
wandb_ft_config = WandbConfig(
    api_key="<wandb_api_key>",
    entity="my-entity",  # must be a valid entity associated with the provided API key
    project="cohere-ft",
)

...  # set up your datasets and hyperparameters

# start a fine-tuning run on cohere (co is your initialized Cohere client)
cmd_r_finetune = co.finetuning.create_finetuned_model(
    request=FinetunedModel(
        name="command-r-ft",
        settings=Settings(
            base_model=...,
            dataset_id=...,
            hyperparameters=...,
            wandb=wandb_ft_config,  # pass your W&B config here
        ),
    ),
)
View your model’s fine-tuning training and validation metrics and hyperparameters in the W&B project that you created.
Organize runs
Your W&B runs are automatically organized and can be filtered/sorted based on any configuration parameter such as job type, base model, learning rate and any other hyper-parameter.
In addition, you can rename your runs, add notes or create tags to group them.
6.5 - Databricks
W&B integrates with Databricks by customizing the W&B Jupyter notebook experience in the Databricks environment.
Configure Databricks
Install wandb in the cluster
Navigate to your cluster configuration, choose your cluster, click Libraries. Click Install New, choose PyPI, and add the package wandb.
Set up authentication
To authenticate your W&B account you can add a Databricks secret which your notebooks can query.
# install databricks cli
pip install databricks-cli

# Generate a token from the databricks UI
databricks configure --token

# Create a scope with one of the two commands (depending on whether you have security features enabled on databricks):
# with security add-on
databricks secrets create-scope --scope wandb
# without security add-on
databricks secrets create-scope --scope wandb --initial-manage-principal users

# Add your api_key from: https://app.wandb.ai/authorize
databricks secrets put --scope wandb --key api_key
6.6 - DeepChecks
DeepChecks helps you validate your machine learning models and data, such as verifying your data’s integrity, inspecting its distributions, validating data splits, evaluating your model, and comparing between different models, all with minimal effort.
To use DeepChecks with Weights & Biases you will first need to sign up for a Weights & Biases account here. With the Weights & Biases integration in DeepChecks you can quickly get started like so:
import wandb
wandb.login()
# import your check from deepchecks
from deepchecks.checks import ModelErrorAnalysis

# run your check
result = ModelErrorAnalysis()

# push that result to wandb
result.to_wandb()
You can also log an entire DeepChecks test suite to Weights & Biases
import wandb
wandb.login()
# import your full_suite tests from deepchecks
from deepchecks.suites import full_suite

# create and run a DeepChecks test suite
suite_result = full_suite().run(...)

# push the results to wandb
# here you can pass any wandb.init configs and arguments you need
suite_result.to_wandb(project="my-suite-project", config={"suite-name": "full-suite"})
Example
This Report shows off the power of using DeepChecks and Weights & Biases
Any questions or issues about this Weights & Biases integration? Open an issue in the DeepChecks github repository and we’ll catch it and get you an answer :)
6.7 - DeepChem
How to integrate W&B with DeepChem library.
The DeepChem library provides open source tools that democratize the use of deep-learning in drug discovery, materials science, chemistry, and biology. This W&B integration adds simple and easy-to-use experiment tracking and model checkpointing while training models using DeepChem.
DeepChem logging in 3 lines of code
logger = WandbLogger(…)
model = TorchModel(…, wandb_logger=logger)
model.fit(…)
from deepchem.models import WandbLogger
logger = WandbLogger(entity="my_entity", project="my_project")
Log your training and evaluation data to W&B
Training loss and evaluation metrics can be automatically logged to Weights & Biases. Optional evaluation can be enabled using DeepChem's ValidationCallback; the WandbLogger detects the ValidationCallback and logs the metrics it generates.
```python
from deepchem.models import TorchModel, ValidationCallback
vc = ValidationCallback(…) # optional
model = TorchModel(…, wandb_logger=logger)
model.fit(…, callbacks=[vc])
logger.finish()
```
```python
from deepchem.models import KerasModel, ValidationCallback
vc = ValidationCallback(…) # optional
model = KerasModel(…, wandb_logger=logger)
model.fit(…, callbacks=[vc])
logger.finish()
```
6.8 - Docker
How to integrate W&B with Docker.
Docker Integration
W&B can store a pointer to the Docker image that your code ran in, giving you the ability to restore a previous experiment to the exact environment it was run in. The wandb library looks for the WANDB_DOCKER environment variable to persist this state. We provide a few helpers that automatically set this state.
Local Development
wandb docker is a command that starts a docker container, passes in wandb environment variables, mounts your code, and ensures wandb is installed. By default the command uses a docker image with TensorFlow, PyTorch, Keras, and Jupyter installed. You can use the same command to start your own docker image: wandb docker my/image:latest. The command mounts the current directory into the /app directory of the container; you can change this with the --dir flag.
Production
The wandb docker-run command is provided for production workloads. It's meant to be a drop-in replacement for nvidia-docker. It's a simple wrapper around the docker run command that adds your credentials and the WANDB_DOCKER environment variable to the call. If you do not pass the --runtime flag and nvidia-docker is available on the machine, this also ensures the runtime is set to nvidia.
Kubernetes
If you run your training workloads in Kubernetes and the k8s API is exposed to your pod (which is the case by default), wandb will query the API for the digest of the Docker image and automatically set the WANDB_DOCKER environment variable.
Restoring
If a run was instrumented with the WANDB_DOCKER environment variable, calling wandb restore username/project:run_id will check out a new branch restoring your code, then launch the exact Docker image used for training, pre-populated with the original command.
6.9 - Farama Gymnasium
How to integrate W&B with Farama Gymnasium.
If you're using Farama Gymnasium we will automatically log videos of your environment generated by gymnasium.wrappers.Monitor. Just set the monitor_gym keyword argument of wandb.init to True.
Our gymnasium integration is very light. We simply look at the name of the video file being logged from gymnasium and name it after that or fall back to "videos" if we don’t find a match. If you want more control, you can always just manually log a video.
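For example, a minimal sketch (the project name is illustrative) that turns on automatic video logging looks like this:

```python
import wandb

# monitor_gym=True tells wandb to pick up the video files written by the
# gymnasium video wrapper and log them to this run
run = wandb.init(project="gymnasium-videos", monitor_gym=True)
```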
Check out this report to learn more on how to use Gymnasium with the CleanRL library.
6.10 - fastai
If you’re using fastai to train your models, W&B has an easy integration using the WandbCallback. Explore the details in interactive docs with examples →
Add the WandbCallback to the learner or fit method:
import wandb
from fastai.callback.wandb import *

# start logging a wandb run
wandb.init(project="my_project")

# To log only during one training phase
learn.fit(..., cbs=WandbCallback())

# To log continuously for all training phases
learn = learner(..., cbs=WandbCallback())
If you use version 1 of Fastai, refer to the Fastai v1 docs.
WandbCallback Arguments
WandbCallback accepts the following arguments:
| Args | Description |
| --- | --- |
| log | Whether to log the model's: gradients, parameters, all or None (default). Losses & metrics are always logged. |
| log_preds | whether we want to log prediction samples (default to True). |
| log_preds_every_epoch | whether to log predictions every epoch or at the end (default to False) |
| log_model | whether we want to log our model (default to False). This also requires SaveModelCallback |
| model_name | The name of the file to save, overrides SaveModelCallback |
| log_dataset | False (default); True will log the folder referenced by learn.dls.path; or a path can be defined explicitly to reference which folder to log. Note: subfolder "models" is always ignored. |
| dataset_name | name of logged dataset (default to folder name). |
| valid_dl | DataLoaders containing items used for prediction samples (default to random items from learn.dls.valid). |
| n_preds | number of logged predictions (default to 36). |
| seed | used for defining random samples. |
For custom workflows, you can manually log your datasets and models:
log_dataset(path, name=None, metadata={})
log_model(path, name=None, metadata={})
Note: any subfolder “models” will be ignored.
Distributed Training
fastai supports distributed training by using the context manager distrib_ctx. W&B supports this automatically and enables you to track your Multi-GPU experiments out of the box.
In the examples above, wandb launches one run per process. At the end of the training, you will end up with two runs. This can sometimes be confusing, and you may want to log only on the main process. To do so, you will have to manually detect which process you are in and avoid creating runs (calling wandb.init) in all the other processes, as shown in the sketch below.
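One way to do this, sketched below, is to check the RANK environment variable that most distributed launchers set; the project name is illustrative and this is not a fastai-specific API.

```python
import os

import wandb

# Only the main process (rank 0) creates a W&B run; all other
# processes skip wandb.init so you end up with a single run.
if int(os.environ.get("RANK", "0")) == 0:
    wandb.init(project="my_project")
```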
This documentation is for fastai v1.
If you use the current version of fastai, you should refer to fastai page.
For scripts using fastai v1, we have a callback that can automatically log model topology, losses, metrics, weights, gradients, sample predictions and best trained model.
The Hugging Face Transformers library makes state-of-the-art NLP models like BERT and training techniques like mixed precision and gradient checkpointing easy to use. The W&B integration adds rich, flexible experiment tracking and model versioning to interactive centralized dashboards without compromising that ease of use.
Next-level logging in a few lines
os.environ["WANDB_PROJECT"] ="<my-amazing-project>"# name your W&B projectos.environ["WANDB_LOG_MODEL"] ="checkpoint"# log all model checkpointsfrom transformers import TrainingArguments, Trainer
args = TrainingArguments(..., report_to="wandb") # turn on W&B loggingtrainer = Trainer(..., args=args)
If you’d rather dive straight into working code, check out this Google Colab.
To log in with your training script, you'll need to sign in to your account at www.wandb.ai, then you will find your API key on the Authorize page.
If you are using Weights and Biases for the first time, you might want to check out our quickstart.
pip install wandb
wandb login
!pip install wandb
import wandb
wandb.login()
2. Name the project
A W&B Project is where all of the charts, data, and models logged from related runs are stored. Naming your project helps you organize your work and keep all the information about a single project in one place.
To add a run to a project simply set the WANDB_PROJECT environment variable to the name of your project. The WandbCallback will pick up this project name environment variable and use it when setting up your run.
WANDB_PROJECT=amazon_sentiment_analysis
%env WANDB_PROJECT=amazon_sentiment_analysis
import os
os.environ["WANDB_PROJECT"]="amazon_sentiment_analysis"
Make sure you set the project name before you initialize the Trainer.
If a project name is not specified the project name defaults to huggingface.
3. Log your training runs to W&B
The most important step when defining your Trainer training arguments, either inside your code or from the command line, is to set report_to to "wandb" in order to enable logging with Weights & Biases.
The logging_steps argument in TrainingArguments will control how often training metrics are pushed to W&B during training. You can also give a name to the training run in W&B using the run_name argument.
That’s it. Now your models will log losses, evaluation metrics, model topology, and gradients to Weights & Biases while they train.
python run_glue.py \                  # run your Python script
  --report_to wandb \                 # enable logging to W&B
  --run_name bert-base-high-lr \      # name of the W&B run (optional)
  # other command line arguments here
from transformers import TrainingArguments, Trainer
args = TrainingArguments(
    # other args and kwargs here
    report_to="wandb",  # enable logging to W&B
    run_name="bert-base-high-lr",  # name of the W&B run (optional)
    logging_steps=1,  # how often to log to W&B
)

trainer = Trainer(
    # other args and kwargs here
    args=args,  # your training args
)
trainer.train() # start training and logging to W&B
Using TensorFlow? Just swap the PyTorch Trainer for the TensorFlow TFTrainer.
4. Turn on model checkpointing
Using Weights & Biases’ Artifacts, you can store up to 100GB of models and datasets for free and then use the Weights & Biases Model Registry to register models to prepare them for staging or deployment in your production environment.
Logging your Hugging Face model checkpoints to Artifacts can be done by setting the WANDB_LOG_MODEL environment variable to one of end or checkpoint or false:
checkpoint: a checkpoint will be uploaded every args.save_steps from the TrainingArguments.
end: the model will be uploaded at the end of training.
Use WANDB_LOG_MODEL along with load_best_model_at_end to upload the best model at the end of training.
import os
os.environ["WANDB_LOG_MODEL"] ="checkpoint"
WANDB_LOG_MODEL="checkpoint"
%env WANDB_LOG_MODEL="checkpoint"
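For example, to upload only the best checkpoint at the end of training, combine WANDB_LOG_MODEL="end" with load_best_model_at_end=True. A minimal sketch follows; the argument values are illustrative.

```python
import os
from transformers import TrainingArguments

# Upload the best checkpoint once, at the end of training
os.environ["WANDB_LOG_MODEL"] = "end"

args = TrainingArguments(
    output_dir="outputs",           # illustrative
    report_to="wandb",
    load_best_model_at_end=True,    # requires matching eval/save strategies
    evaluation_strategy="steps",
    save_strategy="steps",
)
```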
Any Transformers Trainer you initialize from now on will upload models to your W&B project. The model checkpoints you log will be viewable through the Artifacts UI, and include the full model lineage (see an example model checkpoint in the UI here).
By default, your model will be saved to W&B Artifacts as model-{run_id} when WANDB_LOG_MODEL is set to end or checkpoint-{run_id} when WANDB_LOG_MODEL is set to checkpoint.
However, if you pass a run_name in your TrainingArguments, the model will be saved as model-{run_name} or checkpoint-{run_name}.
W&B Model Registry
Once you have logged your checkpoints to Artifacts, you can then register your best model checkpoints and centralize them across your team using the Weights & Biases Model Registry. Here you can organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecycle, and automate downstream actions with webhooks or jobs.
See the Model Registry documentation for how to link a model Artifact to the Model Registry.
5. Visualize evaluation outputs during training
Visualizing your model outputs during training or evaluation is often essential to really understand how your model is training.
By using the callbacks system in the Transformers Trainer, you can log additional helpful data to W&B such as your models’ text generation outputs or other predictions to W&B Tables.
See the Custom logging section below for a full guide on how to log evaluation outputs while training, to log to a W&B Table like this:
6. Finish your W&B Run (Notebook only)
If your training is encapsulated in a Python script, the W&B run will end when your script finishes.
If you are using a Jupyter or Google Colab notebook, you’ll need to tell us when you’re done with training by calling wandb.finish().
trainer.train()  # start training and logging to W&B

# post-training analysis, testing, other logged code

wandb.finish()
7. Visualize your results
Once you have logged your training results you can explore your results dynamically in the W&B Dashboard. It’s easy to compare across dozens of runs at once, zoom in on interesting findings, and coax insights out of complex data with flexible, interactive visualizations.
Advanced features and FAQs
How do I save the best model?
If load_best_model_at_end=True is set in the TrainingArguments that are passed to the Trainer, then W&B will save the best performing model checkpoint to Artifacts.
If you’d like to centralize all your best model versions across your team to organize them by ML task, stage them for production, bookmark them for further evaluation, or kick off downstream Model CI/CD processes then ensure you’re saving your model checkpoints to Artifacts. Once logged to Artifacts, these checkpoints can then be promoted to the Model Registry.
How do I load a saved model?
If you saved your model to W&B Artifacts with WANDB_LOG_MODEL, you can download your model weights for additional training or to run inference. You just load them back into the same Hugging Face architecture that you used before.
# Create a new run
with wandb.init(project="amazon_sentiment_analysis") as run:
    # Pass the name and version of Artifact
    my_model_name = "model-bert-base-high-lr:latest"
    my_model_artifact = run.use_artifact(my_model_name)

    # Download model weights to a folder and return the path
    model_dir = my_model_artifact.download()

    # Load your Hugging Face model from that folder
    # using the same model class
    model = AutoModelForSequenceClassification.from_pretrained(
        model_dir, num_labels=num_labels
    )

    # Do additional training, or run inference
How do I resume training from a checkpoint?
If you had set WANDB_LOG_MODEL='checkpoint', you can resume training by using model_dir as the model_name_or_path argument in your TrainingArguments and passing resume_from_checkpoint=True to Trainer.
last_run_id = "xxxxxxxx"  # fetch the run_id from your wandb workspace

# resume the wandb run from the run_id
with wandb.init(
    project=os.environ["WANDB_PROJECT"],
    id=last_run_id,
    resume="must",
) as run:
    # Connect an Artifact to the run
    my_checkpoint_name = f"checkpoint-{last_run_id}:latest"
    my_checkpoint_artifact = run.use_artifact(my_checkpoint_name)

    # Download checkpoint to a folder and return the path
    checkpoint_dir = my_checkpoint_artifact.download()

    # reinitialize your model and trainer
    model = AutoModelForSequenceClassification.from_pretrained(
        "<model_name>", num_labels=num_labels
    )
    # your awesome training arguments here.
    training_args = TrainingArguments()

    trainer = Trainer(model=model, args=training_args)

    # make sure to use the checkpoint dir to resume training from the checkpoint
    trainer.train(resume_from_checkpoint=checkpoint_dir)
How do I log and view evaluation samples during training?
Logging to Weights & Biases via the Transformers Trainer is taken care of by the WandbCallback in the Transformers library. If you need to customize your Hugging Face logging you can modify this callback by subclassing WandbCallback and adding additional functionality that leverages additional methods from the Trainer class.
Below is the general pattern to add this new callback to the HF Trainer, and further down is a code-complete example to log evaluation outputs to a W&B Table:
# Instantiate the Trainer as normal
trainer = Trainer()

# Instantiate the new logging callback, passing it the Trainer object
evals_callback = WandbEvalsCallback(trainer, tokenizer, ...)

# Add the callback to the Trainer
trainer.add_callback(evals_callback)

# Begin Trainer training as normal
trainer.train()
View evaluation samples during training
The following section shows how to customize the WandbCallback to run model predictions and log evaluation samples to a W&B Table during training. We will do this every eval_steps using the on_evaluate method of the Trainer callback.
Here, we wrote a decode_predictions function to decode the predictions and labels from the model output using the tokenizer.
Then, we create a pandas DataFrame from the predictions and labels and add an epoch column to the DataFrame.
Finally, we create a wandb.Table from the DataFrame and log it to wandb.
Additionally, we can control the frequency of logging by logging the predictions every freq epochs.
Note: Unlike the regular WandbCallback this custom callback needs to be added to the trainer after the Trainer is instantiated and not during initialization of the Trainer.
This is because the Trainer instance is passed to the callback during initialization.
from transformers.integrations import WandbCallback
import pandas as pd


def decode_predictions(tokenizer, predictions):
    labels = tokenizer.batch_decode(predictions.label_ids)
    logits = predictions.predictions.argmax(axis=-1)
    prediction_text = tokenizer.batch_decode(logits)
    return {"labels": labels, "predictions": prediction_text}


class WandbPredictionProgressCallback(WandbCallback):
    """Custom WandbCallback to log model predictions during training.

    This callback logs model predictions and labels to a wandb.Table at each
    logging step during training. It allows to visualize the
    model predictions as the training progresses.

    Attributes:
        trainer (Trainer): The Hugging Face Trainer instance.
        tokenizer (AutoTokenizer): The tokenizer associated with the model.
        sample_dataset (Dataset): A subset of the validation dataset
            for generating predictions.
        num_samples (int, optional): Number of samples to select from
            the validation dataset for generating predictions. Defaults to 100.
        freq (int, optional): Frequency of logging. Defaults to 2.
    """

    def __init__(self, trainer, tokenizer, val_dataset, num_samples=100, freq=2):
        """Initializes the WandbPredictionProgressCallback instance.

        Args:
            trainer (Trainer): The Hugging Face Trainer instance.
            tokenizer (AutoTokenizer): The tokenizer associated with the model.
            val_dataset (Dataset): The validation dataset.
            num_samples (int, optional): Number of samples to select from
                the validation dataset for generating predictions. Defaults to 100.
            freq (int, optional): Frequency of logging. Defaults to 2.
        """
        super().__init__()
        self.trainer = trainer
        self.tokenizer = tokenizer
        self.sample_dataset = val_dataset.select(range(num_samples))
        self.freq = freq

    def on_evaluate(self, args, state, control, **kwargs):
        super().on_evaluate(args, state, control, **kwargs)
        # control the frequency of logging by logging the predictions
        # every `freq` epochs
        if state.epoch % self.freq == 0:
            # generate predictions
            predictions = self.trainer.predict(self.sample_dataset)
            # decode predictions and labels
            predictions = decode_predictions(self.tokenizer, predictions)
            # add predictions to a wandb.Table
            predictions_df = pd.DataFrame(predictions)
            predictions_df["epoch"] = state.epoch
            records_table = self._wandb.Table(dataframe=predictions_df)
            # log the table to wandb
            self._wandb.log({"sample_predictions": records_table})


# First, instantiate the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_datasets["train"],
    eval_dataset=lm_datasets["validation"],
)

# Instantiate the WandbPredictionProgressCallback
progress_callback = WandbPredictionProgressCallback(
    trainer=trainer,
    tokenizer=tokenizer,
    val_dataset=lm_datasets["validation"],
    num_samples=10,
    freq=2,
)

# Add the callback to the trainer
trainer.add_callback(progress_callback)
For a more detailed example, please refer to this Colab.
What additional W&B settings are available?
Further configuration of what is logged with Trainer is possible by setting environment variables. A full list of W&B environment variables can be found here.
| Environment Variable | Usage |
| --- | --- |
| WANDB_PROJECT | Give your project a name (huggingface by default) |
| WANDB_LOG_MODEL | Log the model checkpoint as a W&B Artifact (false by default). <br>false (default): No model checkpointing. <br>checkpoint: A checkpoint will be uploaded every args.save_steps (set in the Trainer's TrainingArguments). <br>end: The final model checkpoint will be uploaded at the end of training. |
| WANDB_WATCH | Set whether you'd like to log your model's gradients, parameters or neither. <br>false (default): No gradient or parameter logging. <br>gradients: Log histograms of the gradients. <br>all: Log histograms of gradients and parameters. |
| WANDB_DISABLED | Set to true to turn off logging entirely (false by default) |
| WANDB_SILENT | Set to true to silence the output printed by wandb (false by default) |
WANDB_WATCH=all
WANDB_SILENT=true
%env WANDB_WATCH=all
%env WANDB_SILENT=true
How do I customize wandb.init?
The WandbCallback that Trainer uses will call wandb.init under the hood when the Trainer is initialized. You can alternatively set up your runs manually by calling wandb.init before the Trainer is initialized. This gives you full control over your W&B run configuration.
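For example, a minimal sketch (the project, run name, tags, and group are illustrative):

```python
import wandb

# Set up the run yourself; the Trainer's WandbCallback will reuse this
# run instead of creating a new one.
wandb.init(
    project="amazon_sentiment_analysis",
    name="bert-base-high-lr",
    tags=["baseline", "high-lr"],
    group="bert",
)
```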
Below are 6 Transformers and W&B related articles you might enjoy
Hyperparameter Optimization for Hugging Face Transformers
Three strategies for hyperparameter optimization for Hugging Face Transformers are compared: Grid Search, Bayesian Optimization, and Population Based Training.
We use a standard uncased BERT model from Hugging Face transformers, and we want to fine-tune on the RTE dataset from the SuperGLUE benchmark
Results show that Population Based Training is the most effective approach to hyperparameter optimization of our Hugging Face transformer model.
In the article, the author demonstrates how to fine-tune a pre-trained GPT2 HuggingFace Transformer model on anyone’s Tweets in five minutes.
The model uses the following pipeline: Downloading Tweets, Optimizing the Dataset, Initial Experiments, Comparing Losses Between Users, Fine-Tuning the Model.
Sentence Classification With Hugging Face BERT and W&B
In this article, we’ll build a sentence classifier leveraging the power of recent breakthroughs in Natural Language Processing, focusing on an application of transfer learning to NLP.
We’ll be using The Corpus of Linguistic Acceptability (CoLA) dataset for single sentence classification, which is a set of sentences labeled as grammatically correct or incorrect that was first published in May 2018.
We’ll use Google’s BERT to create high performance models with minimal effort on a range of NLP tasks.
A Step by Step Guide to Tracking Hugging Face Model Performance
We use Weights & Biases and Hugging Face transformers to train DistilBERT, a Transformer that’s 40% smaller than BERT but retains 97% of BERT’s accuracy, on the GLUE benchmark
The GLUE benchmark is a collection of nine datasets and tasks for training NLP models
Hugging Face Diffusers is the go-to library for state-of-the-art pre-trained diffusion models for generating images, audio, and even 3D structures of molecules. The W&B integration adds rich, flexible experiment tracking, media visualization, pipeline architecture, and configuration management to interactive centralized dashboards without compromising that ease of use.
Next-level logging in just two lines
Log all the prompts, negative prompts, generated media, and configs associated with your experiment by simply including 2 lines of code. Here are the 2 lines of code to begin logging:
# import the autolog function
from wandb.integration.diffusers import autolog

# call the autolog before calling the pipeline
autolog(init=dict(project="diffusers_logging"))
An example of how the results of your experiment are logged.
Get started
Install diffusers, transformers, accelerate, and wandb.
Use autolog to initialize a Weights & Biases run and automatically track the inputs and the outputs from all supported pipeline calls.
You can call the autolog() function with the init parameter, which accepts a dictionary of parameters required by wandb.init().
When you call autolog(), it initializes a Weights & Biases run and automatically tracks the inputs and the outputs from all supported pipeline calls.
Each pipeline call is tracked into its own table in the workspace, and the configs associated with the pipeline call are appended to the list of workflows in the configs for that run.
The prompts, negative prompts, and the generated media are logged in a wandb.Table.
All other configs associated with the experiment including seed and the pipeline architecture are stored in the config section for the run.
The generated media for each pipeline call are also logged in media panels in the run.
You can find a list of supported pipeline calls [here](https://github.com/wandb/wandb/blob/main/wandb/integration/diffusers/autologger.py#L12-L72). If you want to request a new feature for this integration or report a bug associated with it, please open an issue at [https://github.com/wandb/wandb/issues](https://github.com/wandb/wandb/issues).
Examples
Autologging
Here is a brief end-to-end example of the autolog in action:
import torch
from diffusers import DiffusionPipeline
# import the autolog function
from wandb.integration.diffusers import autolog

# call the autolog before calling the pipeline
autolog(init=dict(project="diffusers_logging"))

# Initialize the diffusion pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Define the prompts, negative prompts, and seed.
prompt = ["a photograph of an astronaut riding a horse", "a photograph of a dragon"]
negative_prompt = ["ugly, deformed", "ugly, deformed"]
generator = torch.Generator(device="cpu").manual_seed(10)

# call the pipeline to generate the images
images = pipeline(
    prompt,
    negative_prompt=negative_prompt,
    num_images_per_prompt=2,
    generator=generator,
)
import torch
from diffusers import DiffusionPipeline
import wandb
# import the autolog function
from wandb.integration.diffusers import autolog

# call the autolog before calling the pipeline
autolog(init=dict(project="diffusers_logging"))

# Initialize the diffusion pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Define the prompts, negative prompts, and seed.
prompt = ["a photograph of an astronaut riding a horse", "a photograph of a dragon"]
negative_prompt = ["ugly, deformed", "ugly, deformed"]
generator = torch.Generator(device="cpu").manual_seed(10)

# call the pipeline to generate the images
images = pipeline(
    prompt,
    negative_prompt=negative_prompt,
    num_images_per_prompt=2,
    generator=generator,
)

# Finish the experiment
wandb.finish()
The results of a single experiment:
The results of multiple experiments:
The config of an experiment:
You need to explicitly call wandb.finish() after calling the pipeline when executing the code in IPython notebook environments. This is not necessary when executing Python scripts.
Hugging Face AutoTrain is a no-code tool for training state-of-the-art models for Natural Language Processing (NLP), Computer Vision (CV), Speech, and even Tabular tasks.
Weights & Biases is directly integrated into Hugging Face AutoTrain, providing experiment tracking and config management. It's as easy as using a single parameter in the CLI command for your experiments.
Install prerequisites
Install autotrain-advanced and wandb.
pip install --upgrade autotrain-advanced wandb
!pip install --upgrade autotrain-advanced wandb
To demonstrate these changes, this page fine-tunes an LLM on a math dataset to achieve a SoTA result in pass@1 on the GSM8k benchmark.
Prepare the dataset
Hugging Face AutoTrain expects your CSV custom dataset to have a specific format to work properly.
Your training file must contain a text column, which the training uses. For best results, the text column’s data must conform to the ### Human: Question?### Assistant: Answer. format. Review a great example in timdettmers/openassistant-guanaco.
However, the MetaMathQA dataset includes the columns query, response, and type. First, pre-process this dataset. Remove the type column and combine the content of the query and response columns into a new text column in the ### Human: Query?### Assistant: Response. format. Training uses the resulting dataset, rishiraj/guanaco-style-metamath.
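A rough sketch of that pre-processing with pandas follows; the file names are illustrative and the column names come from the description above.

```python
import pandas as pd

# Load the raw MetaMathQA-style CSV with `query`, `response`, and `type` columns
df = pd.read_csv("metamathqa.csv")  # illustrative file name

# Combine query and response into the single `text` column AutoTrain expects
df["text"] = "### Human: " + df["query"] + "### Assistant: " + df["response"]

# Keep only the `text` column and write out the training file
df[["text"]].to_csv("train.csv", index=False)
```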
Train using autotrain
You can start training using autotrain advanced from the command line or a notebook. Pass the --log argument with the value wandb (--log wandb) to log your results to a W&B run.
Training and inference at scale made simple, efficient and adaptable
Hugging Face Accelerate is a library that enables the same PyTorch code to run across any distributed configuration, to simplify model training and inference at scale.
Accelerate includes a Weights & Biases Tracker which we show how to use below. You can also read more about Accelerate Trackers in their docs here
Start logging with Accelerate
To get started with Accelerate and Weights & Biases you can follow the pseudocode below:
from accelerate import Accelerator
# Tell the Accelerator object to log with wandb
accelerator = Accelerator(log_with="wandb")

# Initialise your wandb run, passing wandb parameters and any config information
accelerator.init_trackers(
    project_name="my_project",
    config={"dropout": 0.1, "learning_rate": 1e-2},
    init_kwargs={"wandb": {"entity": "my-wandb-team"}},
)

...

# Log to wandb by calling `accelerator.log`; `step` is optional
accelerator.log({"train_loss": 1.12, "valid_loss": 0.8}, step=global_step)

# Make sure that the wandb tracker finishes correctly
accelerator.end_training()
Explaining more, you need to:
Pass log_with="wandb" when initialising the Accelerator class.
Pass any parameters you want to pass to wandb.init via a nested dict to init_kwargs.
Pass any other experiment config information you want to log to your wandb run via config.
Use the .log method to log to Weights & Biases; the step argument is optional.
Call .end_training when finished training.
Access the W&B tracker
To access the W&B tracker, use the Accelerator.get_tracker() method. Pass in the string corresponding to a tracker’s .name attribute, which returns the tracker on the main process.
wandb_tracker = accelerator.get_tracker("wandb")
From there you can interact with wandb’s run object like normal:
wandb_tracker.log_artifact(some_artifact_to_log)
Trackers built in Accelerate will automatically execute on the correct process, so if a tracker is only meant to be run on the main process it will do so automatically.
If you want to truly remove Accelerate’s wrapping entirely, you can achieve the same outcome with:
wandb_tracker = accelerator.get_tracker("wandb", unwrap=True)
if accelerator.is_main_process:
wandb_tracker.log_artifact(some_artifact_to_log)
Accelerate Articles
Below is an Accelerate article you may enjoy
HuggingFace Accelerate Super Charged With Weights & Biases
In this article, we’ll look at what HuggingFace Accelerate has to offer and how simple it is to perform distributed training and evaluation, while logging results to Weights & Biases
Hydra is an open-source Python framework that simplifies the development of research and other complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line.
You can continue to use Hydra for configuration management while taking advantage of the power of W&B.
Track metrics
Track your metrics as normal with wandb.init and wandb.log . Here, wandb.entity and wandb.project are defined within a hydra configuration file.
Hydra uses omegaconf as the default way to interface with configuration dictionaries. OmegaConf's dictionaries are not a subclass of primitive dictionaries, so directly passing Hydra's Config to wandb.config leads to unexpected results on the dashboard. It's necessary to convert omegaconf.DictConfig to the primitive dict type before passing it to wandb.config.
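A minimal sketch of that conversion (the config path and keys follow the wandb.entity / wandb.project convention mentioned above and are illustrative):

```python
import hydra
import omegaconf
import wandb


@hydra.main(config_path="configs/", config_name="defaults", version_base=None)
def run_experiment(cfg):
    # Convert Hydra's DictConfig into a primitive dict before handing it to W&B
    config = omegaconf.OmegaConf.to_container(cfg, resolve=True, throw_on_missing=True)
    wandb.init(
        entity=cfg.wandb.entity,
        project=cfg.wandb.project,
        config=config,
    )
    # ... your training code, calling wandb.log(...) as usual ...


if __name__ == "__main__":
    run_experiment()
```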
If your process hangs when started, this may be caused by this known issue. To solve this, try changing wandb's multiprocessing protocol, either by adding an extra settings parameter to `wandb.init` as:
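```python
# switch wandb to the thread start method
wandb.init(settings=wandb.Settings(start_method="thread"))
```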
or by setting a global environment variable from your shell:
$ export WANDB_START_METHOD=thread
Optimize Hyperparameters
W&B Sweeps is a highly scalable hyperparameter search platform which provides interesting insights and visualizations about W&B experiments with minimal code real estate. Sweeps integrates seamlessly with Hydra projects with no coding requirements. The only thing needed is a configuration file describing the various parameters to sweep over, as normal.
W&B automatically creates a sweep inside your project and returns a wandb agent command for you to run on each machine you want to run your sweep.
Pass parameters not present in Hydra defaults
Hydra supports passing extra parameters through the command line which aren't present in the default configuration file, by prefixing them with a +. For example, you can pass an extra parameter with some value by simply calling:
$ python program.py +experiment=some_experiment
You cannot sweep over such + configurations the same way you would when configuring Hydra experiments. To work around this, you can initialize the experiment parameter with a default empty file and use W&B Sweeps to override those empty configs on each call. For more information, read this W&B Report.
W&B has three callbacks for Keras, available from wandb v0.13.4. For the legacy WandbCallback scroll down.
WandbMetricsLogger : Use this callback for Experiment Tracking. It logs your training and validation metrics along with system metrics to Weights and Biases.
WandbModelCheckpoint: Use this callback to log your model checkpoints to Weights and Biases Artifacts.
WandbEvalCallback: This base callback logs model predictions to Weights and Biases Tables for interactive visualization.
These new callbacks:
Adhere to Keras design philosophy.
Reduce the cognitive load of using a single callback (WandbCallback) for everything.
Make it easy for Keras users to modify the callback by subclassing it to support their niche use case.
WandbMetricsLogger automatically logs Keras’ logs dictionary that callback methods such as on_epoch_end, on_batch_end etc, take as an argument.
This tracks:
Training and validation metrics defined in model.compile.
System (CPU/GPU/TPU) metrics.
Learning rate (both for a fixed value and a learning rate scheduler).
import wandb
from wandb.integration.keras import WandbMetricsLogger
# Initialize a new W&B run
wandb.init(config={"bs": 12})

# Pass the WandbMetricsLogger to model.fit
model.fit(
X_train, y_train, validation_data=(X_test, y_test), callbacks=[WandbMetricsLogger()]
)
WandbMetricsLogger reference
| Parameter | Description |
| --- | --- |
| log_freq | (epoch, batch, or an int): if epoch, logs metrics at the end of each epoch. If batch, logs metrics at the end of each batch. If an int, logs metrics at the end of that many batches. Defaults to epoch. |
| initial_global_step | (int): Use this argument to correctly log the learning rate when you resume training from some initial_epoch and a learning rate scheduler is used. This can be computed as step_size * initial_step. Defaults to 0. |
Use the WandbModelCheckpoint callback to save the Keras model (SavedModel format) or model weights periodically and upload them to W&B as a wandb.Artifact for model versioning.
This callback is subclassed from tf.keras.callbacks.ModelCheckpoint, thus the checkpointing logic is taken care of by the parent callback.
This callback saves:
The model that has achieved best performance based on the monitor.
The model at the end of every epoch regardless of the performance.
The model at the end of the epoch or after a fixed number of training batches.
Only model weights or the whole model.
The model either in SavedModel format or in .h5 format.
Use this callback in conjunction with WandbMetricsLogger.
import wandb
from wandb.integration.keras import WandbMetricsLogger, WandbModelCheckpoint
# Initialize a new W&B run
wandb.init(config={"bs": 12})

# Pass the WandbModelCheckpoint to model.fit
model.fit(
X_train,
y_train,
validation_data=(X_test, y_test),
callbacks=[
WandbMetricsLogger(),
WandbModelCheckpoint("models"),
],
)
WandbModelCheckpoint reference
| Parameter | Description |
| --- | --- |
| filepath | (str): path to save the model file. |
| monitor | (str): The metric name to monitor. |
| verbose | (int): Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action. |
| save_best_only | (Boolean): if save_best_only=True, it only saves the latest model or the model it considers the best, as defined by the monitor and mode attributes. |
| save_weights_only | (Boolean): if True, saves only the model's weights. |
| mode | (auto, min, or max): For val_acc, set it to max, for val_loss, set it to min, and so on. |
| save_freq | ("epoch" or int): When using "epoch", the callback saves the model after each epoch. When using an integer, the callback saves the model at the end of this many batches. Note that when monitoring validation metrics such as val_acc or val_loss, save_freq must be set to "epoch" as those metrics are only available at the end of an epoch. |
| options | (str): Optional tf.train.CheckpointOptions object if save_weights_only is true or optional tf.saved_model.SaveOptions object if save_weights_only is false. |
| initial_value_threshold | (float): Floating point initial "best" value of the metric to be monitored. |
Log checkpoints after N epochs
By default (save_freq="epoch"), the callback creates a checkpoint and uploads it as an artifact after each epoch. To create a checkpoint after a specific number of batches, set save_freq to an integer. To checkpoint after N epochs, compute the cardinality of the train dataloader and pass it to save_freq:
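A sketch of that computation, assuming a tf.data training dataset named trainloader and a checkpoint every 2 epochs (both illustrative):

```python
import tensorflow as tf
from wandb.integration.keras import WandbModelCheckpoint

EPOCHS_PER_CHECKPOINT = 2  # illustrative

# Number of batches in one epoch for a tf.data.Dataset
steps_per_epoch = int(trainloader.cardinality().numpy())

# Save (and upload) a checkpoint every EPOCHS_PER_CHECKPOINT epochs' worth of batches
checkpoint_callback = WandbModelCheckpoint(
    "models", save_freq=EPOCHS_PER_CHECKPOINT * steps_per_epoch
)
```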
While checkpointing on TPUs you might encounter an UnimplementedError: File system scheme '[local]' not implemented error message. This happens because the model directory (filepath) must use a cloud storage bucket path (gs://bucket-name/...), and this bucket must be accessible from the TPU server. We can, however, use a local path for checkpointing, which in turn is uploaded as an Artifact.
The WandbEvalCallback is an abstract base class to build Keras callbacks primarily for model prediction and, secondarily, dataset visualization.
This abstract callback is agnostic with respect to the dataset and the task. To use this, inherit from this base WandbEvalCallback callback class and implement the add_ground_truth and add_model_prediction methods.
The WandbEvalCallback is a utility class that provides methods to:
Create data and prediction wandb.Table instances.
Log data and prediction Tables as wandb.Artifact.
Log the data table on_train_begin.
Log the prediction table on_epoch_end.
The following example uses WandbClfEvalCallback for an image classification task. This example callback logs the validation data (data_table) to W&B, performs inference, and logs the prediction (pred_table) to W&B at the end of every epoch.
import tensorflow as tf
import wandb
from wandb.integration.keras import WandbMetricsLogger, WandbEvalCallback


# Implement your model prediction visualization callback
class WandbClfEvalCallback(WandbEvalCallback):
    def __init__(
        self, validation_data, data_table_columns, pred_table_columns, num_samples=100
    ):
        super().__init__(data_table_columns, pred_table_columns)
        self.x = validation_data[0]
        self.y = validation_data[1]

    def add_ground_truth(self, logs=None):
        for idx, (image, label) in enumerate(zip(self.x, self.y)):
            self.data_table.add_data(idx, wandb.Image(image), label)

    def add_model_predictions(self, epoch, logs=None):
        preds = self.model.predict(self.x, verbose=0)
        preds = tf.argmax(preds, axis=-1)
        table_idxs = self.data_table_ref.get_index()
        for idx in table_idxs:
            pred = preds[idx]
            self.pred_table.add_data(
                epoch,
                self.data_table_ref.data[idx][0],
                self.data_table_ref.data[idx][1],
                self.data_table_ref.data[idx][2],
                pred,
            )


# ...
# Initialize a new W&B run
wandb.init(config={"hyper": "parameter"})

# Add the Callbacks to Model.fit
model.fit(
X_train,
y_train,
validation_data=(X_test, y_test),
callbacks=[
WandbMetricsLogger(),
WandbClfEvalCallback(
validation_data=(X_test, y_test),
data_table_columns=["idx", "image", "label"],
pred_table_columns=["epoch", "idx", "image", "label", "pred"],
),
],
)
By default, the logged Tables appear on the W&B Artifact page rather than on the Workspace page.
WandbEvalCallback reference
| Parameter | Description |
| --- | --- |
| data_table_columns | (list) List of column names for the data_table |
| pred_table_columns | (list) List of column names for the pred_table |
Memory footprint details
We log the data_table to W&B when the on_train_begin method is invoked. Once it's uploaded as a W&B Artifact, we get a reference to this table, which can be accessed using the data_table_ref class variable. The data_table_ref is a 2D list that can be indexed like self.data_table_ref[idx][n], where idx is the row number and n is the column number. You can see this usage in the example above.
Customize the callback
You can override the on_train_begin or on_epoch_end methods to have more fine-grained control. If you want to log the samples after N batches, you can implement the on_train_batch_end method.
💡 If you are implementing a callback for model prediction visualization by inheriting WandbEvalCallback and something needs to be clarified or fixed, please let us know by opening an issue.
WandbCallback [legacy]
Use the W&B library WandbCallback Class to automatically save all the metrics and the loss values tracked in model.fit.
import wandb
from wandb.integration.keras import WandbCallback
wandb.init(config={"hyper": "parameter"})
...

# code to set up your model in Keras

# Pass the callback to model.fit
model.fit(
X_train, y_train, validation_data=(X_test, y_test), callbacks=[WandbCallback()]
)
The WandbCallback class supports a wide variety of logging configuration options: specifying a metric to monitor, tracking of weights and gradients, logging of predictions on training_data and validation_data, and more.
Automatically logs history data from any metrics collected by Keras: loss and anything passed into keras_model.compile().
Sets summary metrics for the run associated with the “best” training step, as defined by the monitor and mode attributes. This defaults to the epoch with the minimum val_loss. WandbCallback by default saves the model associated with the best epoch.
Optionally logs gradient and parameter histogram.
Optionally saves training and validation data for wandb to visualize.
WandbCallback reference
| Arguments | Description |
| --- | --- |
| monitor | (str) name of metric to monitor. Defaults to val_loss. |
| mode | (str) one of {auto, min, max}. min - save model when monitor is minimized. max - save model when monitor is maximized. auto - try to guess when to save the model (default). |
| save_model | True - save a model when monitor beats all previous epochs. False - don't save models. |
| save_graph | (boolean) if True save model graph to wandb (default to True). |
| save_weights_only | (boolean) if True, saves only the model's weights (model.save_weights(filepath)). Otherwise, saves the full model. |
| log_weights | (boolean) if True save histograms of the model's layer's weights. |
| log_gradients | (boolean) if True log histograms of the training gradients. |
| training_data | (tuple) Same format (X, y) as passed to model.fit. This is needed for calculating gradients - this is mandatory if log_gradients is True. |
| validation_data | (tuple) Same format (X, y) as passed to model.fit. A set of data for wandb to visualize. If you set this field, every epoch, wandb makes a small number of predictions and saves the results for later visualization. |
| generator | (generator) a generator that returns validation data for wandb to visualize. This generator should return tuples (X, y). Either validation_data or generator should be set for wandb to visualize specific data examples. |
| validation_steps | (int) if validation_data is a generator, how many steps to run the generator for the full validation set. |
| labels | (list) If you are visualizing your data with wandb, this list of labels converts numeric output to understandable strings if you are building a classifier with multiple classes. For a binary classifier, you can pass in a list of two labels [label for false, label for true]. If validation_data and generator are both false, this does nothing. |
| predictions | (int) the number of predictions to make for visualization each epoch, max is 100. |
| input_type | (string) type of the model input to help visualization. Can be one of: (image, images, segmentation_mask). |
| output_type | (string) type of the model output to help visualization. Can be one of: (image, images, segmentation_mask). |
| log_evaluation | (boolean) if True, save a Table containing validation data and the model's predictions at each epoch. See validation_indexes, validation_row_processor, and output_row_processor for additional details. |
| class_colors | ([float, float, float]) if the input or output is a segmentation mask, an array containing an rgb tuple (range 0-1) for each class. |
| log_batch_frequency | (integer) if None, callback logs every epoch. If set to an integer, callback logs training metrics every log_batch_frequency batches. |
| log_best_prefix | (string) if None, saves no extra summary metrics. If set to a string, prepends the monitored metric and epoch with the prefix and saves the results as summary metrics. |
| validation_indexes | ([wandb.data_types._TableLinkMixin]) an ordered list of index keys to associate with each validation example. If log_evaluation is True and you provide validation_indexes, does not create a Table of validation data. Instead, associates each prediction with the row represented by the TableLinkMixin. To obtain a list of row keys, use Table.get_index(). |
| validation_row_processor | (Callable) a function to apply to the validation data, commonly used to visualize the data. The function receives an ndx (int) and a row (dict). If your model has a single input, then row["input"] contains the input data for the row. Otherwise, it contains the names of the input slots. If your fit function takes a single target, then row["target"] contains the target data for the row. Otherwise, it contains the names of the output slots. For example, if your input data is a single array, to visualize the data as an Image, provide lambda ndx, row: {"img": wandb.Image(row["input"])} as the processor. Ignored if log_evaluation is False or validation_indexes are present. |
| output_row_processor | (Callable) same as validation_row_processor, but applied to the model's output. row["output"] contains the results of the model output. |
| infer_missing_processors | (Boolean) Determines whether to infer validation_row_processor and output_row_processor if they are missing. Defaults to True. If you provide labels, W&B attempts to infer classification-type processors where appropriate. |
| log_evaluation_frequency | (int) Determines how often to log evaluation results. Defaults to 0 to log only at the end of training. Set to 1 to log every epoch, 2 to log every other epoch, and so on. Has no effect when log_evaluation is False. |
Frequently Asked Questions
How do I use Keras multiprocessing with wandb?
When setting use_multiprocessing=True, this error may occur:
Error("You must call wandb.init() before wandb.config.batch_size")
To work around it:
In the Sequence class construction, add: wandb.init(group='...').
In main, make sure you’re using if __name__ == "__main__": and put the rest of your script logic inside it.
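A minimal sketch of that workaround follows; the Sequence subclass, data, and group name are illustrative.

```python
import numpy as np
import wandb
from tensorflow import keras


class DataGenerator(keras.utils.Sequence):
    def __init__(self, x, y, batch_size=32):
        # Each worker process joins the same group so its logs stay together
        wandb.init(group="keras-multiprocessing-example")
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[batch], self.y[batch]


if __name__ == "__main__":
    # Keep the rest of your script logic (model definition, model.fit with
    # use_multiprocessing=True, etc.) inside this guard, as described above.
    pass
```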
6.17 - Kubeflow Pipelines (kfp)
How to integrate W&B with Kubeflow Pipelines.
Overview
Kubeflow Pipelines (kfp) is a platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers.
This integration lets users apply decorators to kfp python functional components to automatically log parameters and artifacts to W&B.
This feature was enabled in wandb==0.12.11 and requires kfp<2.0.0
Quickstart
Install W&B and login
!pip install kfp wandb
import wandb
wandb.login()
pip install kfp wandb
wandb login
Decorate your components
Add the @wandb_log decorator and create your components as usual. This will automatically log the input/output parameters and artifacts to W&B each time you run your pipeline.
from kfp import components
from wandb.integration.kfp import wandb_log
@wandb_log
def add(a: float, b: float) -> float:
    return a + b
add = components.create_component_from_func(add)
Pass environment variables to containers
You may need to explicitly pass environment variables to your containers. For two-way linking, you should also set the WANDB_KUBEFLOW_URL environment variable to the base URL of your Kubeflow Pipelines instance. For example, https://kubeflow.mysite.com.
import os

from kfp import dsl
from kubernetes.client.models import V1EnvVar


def add_wandb_env_variables(op):
    env = {
        "WANDB_API_KEY": os.getenv("WANDB_API_KEY"),
        "WANDB_BASE_URL": os.getenv("WANDB_BASE_URL"),
    }

    for name, value in env.items():
        op = op.add_env_variable(V1EnvVar(name, value))
    return op
@dsl.pipeline(name="example-pipeline")
def example_pipeline(param1: str, param2: int):
    conf = dsl.get_pipeline_conf()
    conf.add_op_transformer(add_wandb_env_variables)
Access your data programmatically
Via the Kubeflow Pipelines UI
Click on any Run in the Kubeflow Pipelines UI that has been logged with W&B.
Find details about inputs and outputs in the Input/Output and ML Metadata tabs.
View the W&B web app from the Visualizations tab.
Via the web app UI
The web app UI has the same content as the Visualizations tab in Kubeflow Pipelines, but with more space. Learn more about the web app UI here.
If you want finer control of logging, you can sprinkle in wandb.log and wandb.log_artifact calls in the component.
With explicit wandb.log_artifacts calls
In the example below, we are training a model. The @wandb_log decorator will automatically track the relevant inputs and outputs. If you want to log the training process, you can explicitly add that logging like so:
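A rough sketch of what that explicit logging could look like inside a decorated component; the component name, metric names, and placeholder training loop are illustrative.

```python
from kfp import components
from wandb.integration.kfp import wandb_log


@wandb_log
def train_model(epochs: int) -> float:
    import wandb  # kfp function-based components need their imports inside the function

    best_accuracy = 0.0
    for epoch in range(epochs):
        accuracy = 1.0 - 1.0 / (epoch + 1)  # placeholder for your real training step
        # Explicitly log per-epoch metrics on top of the decorator's
        # automatic input/output logging
        wandb.log({"epoch": epoch, "accuracy": accuracy})
        best_accuracy = max(best_accuracy, accuracy)
    return best_accuracy


train_model = components.create_component_from_func(train_model)
```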
The wandb library includes a special callback for LightGBM. It’s also easy to use the generic logging features of Weights & Biases to track large experiments, like hyperparameter sweeps.
from wandb.integration.lightgbm import wandb_callback, log_summary
import lightgbm as lgb
# Log metrics to W&B
gbm = lgb.train(..., callbacks=[wandb_callback()])

# Log feature importance plot and upload model checkpoint to W&B
log_summary(gbm, save_model_checkpoint=True)
Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes Sweeps, a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments.
To learn more about these tools and see an example of how to use Sweeps with XGBoost, check out this interactive Colab notebook.
Decorating a step turns logging off or on for certain types within that step.
In this example, all datasets and models in start will be logged
from wandb.integration.metaflow import wandb_log
class WandbExampleFlow(FlowSpec):
    @wandb_log(datasets=True, models=True, settings=wandb.Settings(...))
    @step
    def start(self):
        self.raw_df = pd.read_csv(...)  # pd.DataFrame -> upload as dataset
        self.model_file = torch.load(...)  # nn.Module -> upload as model
        self.next(self.transform)
Decorating a flow is equivalent to decorating all the constituent steps with a default.
In this case, all steps in WandbExampleFlow log datasets and models by default, just like decorating each step with @wandb_log(datasets=True, models=True).
from wandb.integration.metaflow import wandb_log
@wandb_log(datasets=True, models=True)  # decorate all @step
class WandbExampleFlow(FlowSpec):
    @step
    def start(self):
        self.raw_df = pd.read_csv(...)  # pd.DataFrame -> upload as dataset
        self.model_file = torch.load(...)  # nn.Module -> upload as model
        self.next(self.transform)
Decorating the flow is equivalent to decorating all steps with a default. That means if you later decorate a Step with another @wandb_log, it overrides the flow-level decoration.
In this example:
start and mid log both datasets and models.
end logs neither datasets nor models.
from wandb.integration.metaflow import wandb_log
@wandb_log(datasets=True, models=True)  # same as decorating start and mid
class WandbExampleFlow(FlowSpec):
    # this step will log datasets and models
    @step
    def start(self):
        self.raw_df = pd.read_csv(...)  # pd.DataFrame -> upload as dataset
        self.model_file = torch.load(...)  # nn.Module -> upload as model
        self.next(self.mid)

    # this step will also log datasets and models
    @step
    def mid(self):
        self.raw_df = pd.read_csv(...)  # pd.DataFrame -> upload as dataset
        self.model_file = torch.load(...)  # nn.Module -> upload as model
        self.next(self.end)

    # this step is overwritten and will NOT log datasets OR models
    @wandb_log(datasets=False, models=False)
    @step
    def end(self):
        self.raw_df = pd.read_csv(...)
        self.model_file = torch.load(...)
Access your data programmatically
You can access the information we've captured in three ways: inside the original Python process being logged using the wandb client library, with the web app UI, or programmatically using our Public API. Parameters are saved to W&B's config and can be found in the Overview tab. datasets, models, and others are saved to W&B Artifacts and can be found in the Artifacts tab. Base Python types are saved to W&B's summary dict and can be found in the Overview tab. See our guide to the Public API for details on using the API to get this information programmatically from outside the run.
Cheat sheet
| Data | Client library | UI |
| --- | --- | --- |
| Parameter(...) | wandb.config | Overview tab, Config |
| datasets, models, others | wandb.use_artifact("{var_name}:latest") | Artifacts tab |
| Base Python types (dict, list, str, etc.) | wandb.summary | Overview tab, Summary |
wandb_log kwargs
| kwarg | Options |
| --- | --- |
| datasets | True: Log instance variables that are a dataset <br> False |
| models | True: Log instance variables that are a model <br> False |
| others | True: Log anything else that is serializable as a pickle <br> False |
| settings | wandb.Settings(…): Specify your own wandb settings for this step or flow <br> None: Equivalent to passing wandb.Settings() |
By default, if:
- settings.run_group is None, it will be set to {flow_name}/{run_id}
- settings.run_job_type is None, it will be set to {run_job_type}/{step_name}
Frequently Asked Questions
What exactly do you log? Do you log all instance and local variables?
wandb_log only logs instance variables. Local variables are NEVER logged. This is useful to avoid logging unnecessary data.
Which data types get logged?
We currently support these types:
| Logging Setting | Type |
| --- | --- |
| default (always on) | dict, list, set, str, int, float, bool |
| datasets | pd.DataFrame, pathlib.Path |
| models | nn.Module, sklearn.base.BaseEstimator |
| others | Anything that is pickle-able and JSON serializable |
How can I configure logging behavior?
| Kind of Variable | Behavior | Example | Data Type |
| --- | --- | --- | --- |
| Instance | Auto-logged | self.accuracy | float |
| Instance | Logged if datasets=True | self.df | pd.DataFrame |
| Instance | Not logged if datasets=False | self.df | pd.DataFrame |
| Local | Never logged | accuracy | float |
| Local | Never logged | df | pd.DataFrame |
Is artifact lineage tracked?
Yes. If you have an artifact that is an output of step A and an input to step B, we automatically construct the lineage DAG for you.
MMEngine by OpenMMLab is a foundational library for training deep learning models based on PyTorch. MMEngine implements a next-generation training architecture for the OpenMMLab algorithm library, providing a unified execution foundation for over 30 algorithm libraries within OpenMMLab. Its core components include the training engine, evaluation engine, and module management.
Use the WandbVisBackend to log additional records such as graphs, images, scalars, etc.
Get started
Install openmim and wandb.
pip install -q -U openmim wandb
!pip install -q -U openmim wandb
Next, install mmengine and mmcv using mim.
mim install -q mmengine mmcv
!mim install -q mmengine mmcv
Use the WandbVisBackend with MMEngine Runner
This section demonstrates a typical workflow using WandbVisBackend using mmengine.runner.Runner.
Define a visualizer from a visualization config.
from mmengine.visualization import Visualizer
# define the visualization configs
visualization_cfg = dict(
    name="wandb_visualizer",
    vis_backends=[
        dict(
            type='WandbVisBackend',
            init_kwargs=dict(project="mmengine"),
        )
    ],
    save_dir="runs/wandb",
)

# get the visualizer from the visualization configs
visualizer = Visualizer.get_instance(**visualization_cfg)
You pass a dictionary of [W&B run initialization](/ref/python/init/) input parameters to `init_kwargs`.
Initialize a runner with the visualizer, and call runner.train().
from mmengine.runner import Runner
# build the mmengine Runner, which is a training helper for PyTorch
runner = Runner(
    model,
    work_dir='runs/gan/',
    train_dataloader=train_dataloader,
    train_cfg=train_cfg,
    optim_wrapper=opt_wrapper_dict,
    visualizer=visualizer,  # pass the visualizer
)

# start training
runner.train()
Use the WandbVisBackend with OpenMMLab computer vision libraries
The WandbVisBackend can also be used easily to track experiments with OpenMMLab computer vision libraries such as MMDetection.
# inherit base configs from the default runtime configs
_base_ = ["../_base_/default_runtime.py"]

# Assign the `WandbVisBackend` config dictionary to the
# `vis_backends` of the `visualizer` from the base configs
_base_.visualizer.vis_backends = [
dict(
type='WandbVisBackend',
init_kwargs={
'project': 'mmdet',
'entity': 'geekyrakshit' },
),
]
6.21 - MMF
How to integrate W&B with Meta AI’s MMF.
The WandbLogger class in Meta AI’s MMF library will enable Weights & Biases to log the training/validation metrics, system (GPU and CPU) metrics, model checkpoints and configuration parameters.
Current features
The following features are currently supported by the WandbLogger in MMF:
Training & Validation metrics
Learning Rate over time
Model Checkpoint saving to W&B Artifacts
GPU and CPU system metrics
Training configuration parameters
Config parameters
The following options are available in MMF config to enable and customize the wandb logging:
training:
wandb:
enabled: true
# An entity is a username or team name where you're sending runs.
# By default it will log the run to your user account.
entity: null
# Project name to be used while logging the experiment with wandb
project: mmf
# Experiment/ run name to be used while logging the experiment
# under the project with wandb. The default experiment name
# is: ${training.experiment_name}
name: ${training.experiment_name}
# Turn on model checkpointing, saving checkpoints to W&B Artifacts
log_model_checkpoint: true
# Additional argument values that you want to pass to wandb.init().
# Check out the documentation at /ref/python/init
# to see what arguments are available, such as:
# job_type: 'train'
# tags: ['tag1', 'tag2']
env:
# To change the path to the directory where wandb metadata would be
# stored (Default: env.log_dir):
wandb_logdir: ${env:MMF_WANDB_LOGDIR,}
6.22 - MosaicML Composer
State of the art algorithms to train your neural networks
Composer is a library for training neural networks better, faster, and cheaper. It contains many state-of-the-art methods for accelerating neural network training and improving generalization, along with an optional Trainer API that makes composing many different enhancements easy.
W&B provides a lightweight wrapper for logging your ML experiments. But you don’t need to combine the two yourself: W&B is incorporated directly into the Composer library via the WandBLogger.
Start logging to W&B
from composer import Trainer
from composer.loggers import WandBLogger
trainer = Trainer(..., logger=WandBLogger())
Use Composer’s WandBLogger
The Composer library uses the WandBLogger class in the Trainer to log metrics to Weights and Biases. It is as simple as instantiating the logger and passing it to the Trainer.
Below are the parameters for WandbLogger; see the Composer documentation for a full list and description.
| Parameter | Description |
|---|---|
| project | W&B project name (str, optional) |
| group | W&B group name (str, optional) |
| name | W&B run name. If not specified, the State.run_name is used (str, optional) |
| entity | W&B entity name, such as your username or W&B Team name (str, optional) |
| tags | W&B tags (List[str], optional) |
| log_artifacts | Whether to log checkpoints to wandb, default: false (bool, optional) |
| rank_zero_only | Whether to log only on the rank-zero process. When logging artifacts, it is highly recommended to log on all ranks. Artifacts from ranks ≥1 are not stored, which may discard pertinent information. For example, when using Deepspeed ZeRO, it would be impossible to restore from checkpoints without artifacts from all ranks, default: True (bool, optional) |
| init_kwargs | Params to pass to wandb.init, such as your wandb config. See here for the full list of arguments wandb.init accepts |
A typical usage would be:
init_kwargs = {
    "notes": "Testing higher learning rate in this experiment",
    "config": {
        "arch": "Llama",
        "use_mixed_precision": True,
    },
}

wandb_logger = WandBLogger(log_artifacts=True, init_kwargs=init_kwargs)
Log prediction samples
You can use Composer's Callbacks system to control when you log to Weights & Biases via the WandBLogger. In this example, a sample of the validation images and predictions is logged:
import wandb
from composer import Callback, State, Logger
class LogPredictions(Callback):
    def __init__(self, num_samples=100, seed=1234):
        super().__init__()
        self.num_samples = num_samples
        self.data = []

    def eval_batch_end(self, state: State, logger: Logger):
        """Compute predictions per batch and stores them on self.data"""
        if state.timer.epoch == state.max_duration:  # on last val epoch
            if len(self.data) < self.num_samples:
                n = self.num_samples
                x, y = state.batch_pair
                outputs = state.outputs.argmax(-1)
                data = [
                    [wandb.Image(x_i), y_i, y_pred]
                    for x_i, y_i, y_pred in list(zip(x[:n], y[:n], outputs[:n]))
                ]
                self.data += data

    def eval_end(self, state: State, logger: Logger):
        """Create a wandb.Table and log it"""
        columns = ['image', 'ground truth', 'prediction']
        table = wandb.Table(columns=columns, data=self.data[:self.num_samples])
        wandb.log({'sample_table': table}, step=int(state.timer.batch))

...

trainer = Trainer(
    ...,
    loggers=[WandBLogger()],
    callbacks=[LogPredictions()],
)
Use the W&B OpenAI API integration to log requests, responses, token counts and model metadata for all OpenAI models, including fine-tuned models.
See the OpenAI fine-tuning integration to learn how to use W&B to track your fine-tuning experiments, models, and datasets and share your results with your colleagues.
By logging your API inputs and outputs, you can quickly evaluate the performance of different prompts, compare different model settings (such as temperature), and track other usage metrics such as token usage.
Install OpenAI Python API library
The W&B autolog integration works with OpenAI version 0.28.1 and below.
To install OpenAI Python API version 0.28.1, run:
pip install openai==0.28.1
Use the OpenAI Python API
1. Import autolog and initialise it
First, import autolog from wandb.integration.openai and initialise it.
import os
import openai
from wandb.integration.openai import autolog
autolog({"project": "gpt5"})
You can optionally pass a dictionary with arguments that wandb.init() accepts to autolog. This includes a project name, team name, entity, and more. For more information about wandb.init, see the API Reference Guide.
2. Call the OpenAI API
Each call you make to the OpenAI API is now logged to W&B automatically.
os.environ["OPENAI_API_KEY"] = "XXX"

chat_request_kwargs = dict(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers"},
        {"role": "user", "content": "Where was it played?"},
    ],
)
response = openai.ChatCompletion.create(**chat_request_kwargs)
3. View your OpenAI API inputs and responses
Click on the W&B run link generated by autolog in step 1. This redirects you to your project workspace in the W&B App.
Select a run you created to view the trace table, trace timeline and the model architecture of the OpenAI LLM used.
Turn off autolog
W&B recommends that you call disable() to close all W&B processes when you are finished using the OpenAI API.
autolog.disable()
Now your inputs and completions will be logged to W&B, ready for analysis or to be shared with colleagues.
Log your OpenAI GPT-3.5 or GPT-4 model’s fine-tuning metrics and configuration to W&B. Utilize the W&B ecosystem to track your fine-tuning experiments, models, and datasets and share your results with your colleagues.
See the Weights and Biases Integration section in the OpenAI documentation for supplemental information on how to integrate W&B with OpenAI for fine-tuning.
Install or update OpenAI Python API
The W&B OpenAI fine-tuning integration works with OpenAI version 1.0 and above. See the PyPI documentation for the latest version of the OpenAI Python API library.
To install OpenAI Python API, run:
pip install openai
If you already have OpenAI Python API installed, you can update it with:
pip install -U openai
Sync your OpenAI fine-tuning results
Integrate W&B with OpenAI’s fine-tuning API to log your fine-tuning metrics and configuration to W&B. To do this, use the WandbLogger class from the wandb.integration.openai.fine_tuning module.
from wandb.integration.openai.fine_tuning import WandbLogger
# Finetuning logic
WandbLogger.sync(fine_tune_job_id=FINETUNE_JOB_ID)
Sync your fine-tunes
Sync your results from your script
from wandb.integration.openai.fine_tuning import WandbLogger
# one line command
WandbLogger.sync()

# passing optional parameters
WandbLogger.sync(
    fine_tune_job_id=None,
    num_fine_tunes=None,
    project="OpenAI-Fine-Tune",
    entity=None,
    overwrite=False,
    model_artifact_name="model-metadata",
    model_artifact_type="model",
    **kwargs_wandb_init
)
Reference
| Argument | Description |
|---|---|
| fine_tune_job_id | The OpenAI fine-tune ID, which you get when you create your fine-tune job using client.fine_tuning.jobs.create. If this argument is None (default), all the OpenAI fine-tune jobs that haven't already been synced will be synced to W&B. |
| openai_client | Pass an initialized OpenAI client to sync. If no client is provided, one is initialized by the logger itself. By default it is None. |
| num_fine_tunes | If no ID is provided, all the unsynced fine-tunes will be logged to W&B. This argument lets you select the number of recent fine-tunes to sync. If num_fine_tunes is 5, it selects the 5 most recent fine-tunes. |
| project | Weights and Biases project name where your fine-tune metrics, models, data, etc. will be logged. By default, the project name is "OpenAI-Fine-Tune". |
| entity | W&B username or team name where you're sending runs. By default, your default entity is used, which is usually your username. |
| overwrite | Forces logging and overwrites the existing wandb run of the same fine-tune job. By default this is False. |
| wait_for_job_success | Once an OpenAI fine-tuning job is started, it usually takes a bit of time. To ensure that your metrics are logged to W&B as soon as the fine-tune job is finished, this setting checks every 60 seconds for the status of the fine-tune job to change to succeeded. Once the fine-tune job is detected as successful, the metrics are synced automatically to W&B. Set to True by default. |
| model_artifact_name | The name of the model artifact that is logged. Defaults to "model-metadata". |
| model_artifact_type | The type of the model artifact that is logged. Defaults to "model". |
| **kwargs_wandb_init | Any additional arguments passed directly to wandb.init() |
Dataset Versioning and Visualization
Versioning
The training and validation data that you upload to OpenAI for fine-tuning are automatically logged as W&B Artifacts for easier version control. Below is a view of the training file in Artifacts. Here you can see the W&B run that logged this file, when it was logged, what version of the dataset this is, the metadata, and the DAG lineage from the training data to the trained model.
Visualization
The datasets are visualized as W&B Tables, which allows you to explore, search, and interact with the dataset. Check out the training samples visualized using W&B Tables below.
The fine-tuned model and model versioning
OpenAI gives you an id for the fine-tuned model. Since we don't have access to the model weights, the WandbLogger creates a model_metadata.json file with all the details (hyperparameters, data file ids, etc.) of the model along with the `fine_tuned_model` id, and logs it as a W&B Artifact.
This model (metadata) artifact can further be linked to a model in the W&B Model Registry.
Frequently Asked Questions
How do I share my fine-tune results with my team in W&B?
Log your fine-tune jobs to your team account with:
WandbLogger.sync(entity="YOUR_TEAM_NAME")
How can I organize my runs?
Your W&B runs are automatically organized and can be filtered/sorted based on any configuration parameter such as job type, base model, learning rate, training filename and any other hyper-parameter.
In addition, you can rename your runs, add notes or create tags to group them.
Once you're satisfied, you can save your workspace and use it to create a report, importing data from your runs and saved artifacts (training/validation files).
How can I access my fine-tuned model?
The fine-tuned model ID is logged to W&B as an artifact (model_metadata.json) as well as to the run config.
The training and validation data are logged automatically to W&B as artifacts. The metadata including the ID for the fine-tuned model is also logged as artifacts.
You can always control the pipeline using low level wandb APIs like wandb.Artifact, wandb.log, etc. This will allow complete traceability of your data and models.
“The team that has been maintaining Gym since 2021 has moved all future development to Gymnasium, a drop in replacement for Gym (import gymnasium as gym), and Gym will not be receiving any future updates.” (Source)
Since Gym is no longer an actively maintained project, try out our integration with Gymnasium.
If you're using OpenAI Gym, Weights & Biases automatically logs videos of your environment generated by gym.wrappers.Monitor. Just set the monitor_gym keyword argument of wandb.init to True or call wandb.gym.monitor().
Our gym integration is very light. We simply look at the name of the video file being logged from gym and name it after that or fall back to "videos" if we don’t find a match. If you want more control, you can always just manually log a video.
PaddleDetection is an end-to-end object-detection development kit based on PaddlePaddle. It detects various mainstream objects, segments instances, and tracks and detects keypoints using configurable modules such as network components, data augmentations, and losses.
PaddleDetection now includes a built-in W&B integration which logs all your training and validation metrics, as well as your model checkpoints and their corresponding metadata.
The PaddleDetection WandbLogger logs your training and evaluation metrics to Weights & Biases as well as your model checkpoints while training.
Read a W&B blog post which illustrates how to integrate a YOLOX model with PaddleDetection on a subset of the COCO2017 dataset.
Use PaddleDetection with W&B
Sign up and log in to W&B
Sign up for a free Weights & Biases account, then pip install the wandb library. To log in, you'll need to be signed in to your account at www.wandb.ai. Once signed in, you will find your API key on the Authorize page.
PaddleOCR aims to create multilingual, awesome, leading, and practical OCR tools, implemented in PaddlePaddle, that help users train better models and apply them in practice. PaddleOCR supports a variety of cutting-edge algorithms related to OCR and has developed industrial solutions. PaddleOCR now comes with a Weights & Biases integration for logging training and evaluation metrics along with model checkpoints and corresponding metadata.
Example Blog & Colab
Read here to see how to train a model with PaddleOCR on the ICDAR2015 dataset. This also comes with a Google Colab, and the corresponding live W&B dashboard is available here. There is also a Chinese version of this blog: W&B对您的OCR模型进行训练和调试 (Train and debug your OCR models with W&B).
Use PaddleOCR with Weights & Biases
1. Sign up and Log in to wandb
Sign up for a free account, then from the command line install the wandb library in a Python 3 environment. To log in, you'll need to be signed in to your account at www.wandb.ai; once signed in, you will find your API key on the Authorize page.
pip install wandb
wandb login
!pip install wandb
wandb.login()
2. Add wandb to your config.yml file
PaddleOCR requires configuration variables to be provided using a yaml file. Adding the following snippet at the end of the configuration yaml file will automatically log all training and validation metrics to a W&B dashboard along with model checkpoints:
Global:
use_wandb: True
Any additional, optional arguments that you might like to pass to wandb.init can also be added under the wandb header in the yaml file:
wandb:
project: CoolOCR # (optional) this is the wandb project name
entity: my_team # (optional) if you're using a wandb team, you can pass the team name here
name: MyOCRModel # (optional) this is the name of the wandb run
3. Pass the config.yml file to train.py
The yaml file is then provided as an argument to the training script available in the PaddleOCR repository.
python tools/train.py -c config.yml
Once you run your train.py file with Weights & Biases turned on, a link will be generated to bring you to your W&B dashboard:
Feedback or Issues?
If you have any feedback or issues about the Weights & Biases integration please open an issue on the PaddleOCR GitHub or email support@wandb.com.
6.28 - Prodigy
How to integrate W&B with Prodigy.
Prodigy is an annotation tool for creating training and evaluation data for machine learning models, error analysis, data inspection & cleaning. W&B Tables allow you to log, visualize, analyze, and share datasets (and more!) inside W&B.
The W&B integration with Prodigy adds simple and easy-to-use functionality to upload your Prodigy-annotated dataset directly to W&B for use with Tables.
Run a few lines of code, like these:
import wandb
from wandb.integration.prodigy import upload_dataset
with wandb.init(project="prodigy"):
upload_dataset("news_headlines_ner")
and get visual, interactive, shareable tables like this one:
Quickstart
Use wandb.integration.prodigy.upload_dataset to upload your annotated prodigy dataset directly from the local Prodigy database to W&B in our Table format. For more information on Prodigy, including installation & setup, please refer to the Prodigy documentation.
W&B will automatically try to convert images and named entity fields to wandb.Image and wandb.Html respectively. Extra columns may be added to the resulting table to include these visualizations.
PyTorch is one of the most popular frameworks for deep learning in Python, especially among researchers. W&B provides first class support for PyTorch, from logging gradients to profiling your code on the CPU and GPU.
To automatically log gradients, you can call wandb.watch and pass in your PyTorch model.
import wandb
wandb.init(config=args)
model = ...  # set up your model

# Magic
wandb.watch(model, log_freq=100)

model.train()
for batch_idx, (data, target) in enumerate(train_loader):
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    if batch_idx % args.log_interval == 0:
        wandb.log({"loss": loss})
If you need to track multiple models in the same script, you can call wandb.watch on each model separately. Reference documentation for this function is here.
Gradients, metrics, and the graph won’t be logged until wandb.log is called after a forward and backward pass.
Log images and media
You can pass PyTorch Tensors with image data into wandb.Image and utilities from torchvision will be used to convert them to images automatically:
images_t = ...  # generate or load images as PyTorch Tensors
wandb.log({"examples": [wandb.Image(im) for im in images_t]})
For more on logging rich media to W&B in PyTorch and other frameworks, check out our media logging guide.
If you also want to include information alongside media, like your model’s predictions or derived metrics, use a wandb.Table.
my_table = wandb.Table()
my_table.add_column("image", images_t)
my_table.add_column("label", labels)
my_table.add_column("class_prediction", predictions_t)
# Log your Table to W&B
wandb.log({"mnist_predictions": my_table})
For more on logging and visualizing datasets and models, check out our guide to W&B Tables.
Profile PyTorch code
W&B integrates directly with PyTorch Kineto’s Tensorboard plugin to provide tools for profiling PyTorch code, inspecting the details of CPU and GPU communication, and identifying bottlenecks and optimizations.
profile_dir = "path/to/run/tbprofile/"
profiler = torch.profiler.profile(
    schedule=schedule,  # see the profiler docs for details on scheduling
    on_trace_ready=torch.profiler.tensorboard_trace_handler(profile_dir),
    with_stack=True,
)

with profiler:
    ...  # run the code you want to profile here
    # see the profiler docs for detailed usage information

# create a wandb Artifact
profile_art = wandb.Artifact("trace", type="profile")

# add the pt.trace.json file to the Artifact (glob returns a list, so take the first match)
profile_art.add_file(glob.glob(profile_dir + "*.pt.trace.json")[0])

# log the artifact
profile_art.save()
The interactive trace viewing tool is based on the Chrome Trace Viewer, which works best with the Chrome browser.
6.30 - PyTorch Geometric
PyTorch Geometric or PyG is one of the most popular libraries for geometric deep learning and W&B works extremely well with it for visualizing graphs and tracking experiments.
Get started
After you have installed pytorch geometric, install the wandb library and login
pip install wandb
wandb login
!pip install wandb
import wandb
wandb.login()
Visualize the graphs
You can save details about the input graphs including number of edges, number of nodes and more. W&B supports logging plotly charts and HTML panels so any visualizations you create for your graph can then also be logged to W&B.
Use PyVis
The following snippet shows how you could do that with PyVis and HTML.
from pyvis.network import Network
from tqdm import tqdm
import wandb

wandb.init(project="graph_vis")

net = Network(height="750px", width="100%", bgcolor="#222222", font_color="white")

# Add the edges from the PyG graph to the PyVis network
for e in tqdm(g.edge_index.T):
    src = e[0].item()
    dst = e[1].item()
    net.add_node(dst)
    net.add_node(src)
    net.add_edge(src, dst, value=0.1)

# Save the PyVis visualisation to an HTML file
net.show("graph.html")
wandb.log({"eda/graph": wandb.Html("graph.html")})
wandb.finish()
Use Plotly
To use plotly to create a graph visualization, first you need to convert the PyG graph to a networkx object. Following this you will need to create Plotly scatter plots for both nodes and edges. The snippet below can be used for this task.
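A minimal sketch of that approach, assuming the same PyG graph g used in the PyVis example above; to_networkx comes from torch_geometric.utils, and the resulting Plotly figure can be logged directly with wandb.log:
import networkx as nx
import plotly.graph_objects as go
import wandb
from torch_geometric.utils import to_networkx

# Convert the PyG graph `g` to a networkx object and compute 2D layout positions
G = to_networkx(g)
pos = nx.spring_layout(G)

# Build a scatter trace for the edges (None separates individual line segments)
edge_x, edge_y = [], []
for src, dst in G.edges():
    x0, y0 = pos[src]
    x1, y1 = pos[dst]
    edge_x += [x0, x1, None]
    edge_y += [y0, y1, None]
edge_trace = go.Scatter(x=edge_x, y=edge_y, mode="lines", line=dict(width=0.5))

# Build a scatter trace for the nodes
node_x = [pos[n][0] for n in G.nodes()]
node_y = [pos[n][1] for n in G.nodes()]
node_trace = go.Scatter(x=node_x, y=node_y, mode="markers")

# Log the Plotly figure to W&B
fig = go.Figure(data=[edge_trace, node_trace])
wandb.log({"eda/graph_plotly": fig})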
You can use W&B to track your experiments and related metrics, such as loss functions, accuracy, and more. Add the following line to your training loop:
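For example (the metric names and the loss/accuracy variables are placeholders for whatever your loop computes):
wandb.log({"loss": loss, "accuracy": accuracy})  # placeholders for values computed in your loop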
torchtune is a PyTorch-based library designed to streamline the authoring, fine-tuning, and experimentation processes for large language models (LLMs). Additionally, torchtune has built-in support for logging with W&B, enhancing tracking and visualization of training processes.
Enable W&B logging on the recipe’s config file by modifying the metric_logger section. Change the _component_ to torchtune.utils.metric_logging.WandBLogger class. You can also pass a project name and log_every_n_steps to customize the logging behavior.
You can also pass any other kwargs as you would to the wandb.init method. For example, if you are working on a team, you can pass the entity argument to the WandBLogger class to specify the team name.
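A minimal sketch of that metric_logger section (the project and entity values are illustrative, and exact keys may vary between recipes):
metric_logger:
  _component_: torchtune.utils.metric_logging.WandBLogger
  project: my-torchtune-project
  entity: my-team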
You can explore the W&B dashboard to see the logged metrics. By default W&B logs all of the hyperparameters from the config file and the launch overrides.
W&B captures the resolved config on the Overview tab. W&B also stores the config in YAML format on the Files tab.
Logged Metrics
Each recipe has its own training loop. Check each individual recipe to see its logged metrics, which include these by default:
| Metric | Description |
|---|---|
| loss | The loss of the model |
| lr | The learning rate |
| tokens_per_second | The tokens per second of the model |
| grad_norm | The gradient norm of the model |
| global_step | The current step in the training loop. Takes gradient accumulation into account: an optimizer step (and therefore a model update) happens once every gradient_accumulation_steps batches. |
global_step is not the same as the number of training batches. It corresponds to the current step in the training loop and takes gradient accumulation into account: every time an optimizer step is taken, global_step is incremented by 1. For example, if the dataloader has 10 batches, gradient accumulation steps is 2, and you run for 3 epochs, the optimizer will step 15 times; in this case global_step will range from 1 to 15.
The streamlined design of torchtune makes it easy to add custom metrics or modify existing ones. It suffices to modify the corresponding recipe file; for example, you could log current_epoch as a percentage of the total number of epochs as follows:
# inside the `train.py` function in the recipe file
self._metric_logger.log_dict(
    {"current_epoch": self.epochs * self.global_step / self._steps_per_epoch},
    step=self.global_step,
)
This is a fast evolving library, the current metrics are subject to change. If you want to add a custom metric, you should modify the recipe and call the corresponding self._metric_logger.* function.
Save and load checkpoints
The torchtune library supports various checkpoint formats. Depending on the origin of the model you are using, you should switch to the appropriate checkpointer class.
If you want to save the model checkpoints to W&B Artifacts, the simplest solution is to override the save_checkpoint functions inside the corresponding recipe.
Here is an example of how you can override the save_checkpoint function to save the model checkpoints to W&B Artifacts.
def save_checkpoint(self, epoch: int) -> None:
    ...
    ## Let's save the checkpoint to W&B
    ## depending on the Checkpointer Class the file will be named differently
    ## Here is an example for the full_finetune case
    checkpoint_file = Path.joinpath(
        self._checkpointer._output_dir, f"torchtune_model_{epoch}"
    ).with_suffix(".pt")
    wandb_artifact = wandb.Artifact(
        name=f"torchtune_model_{epoch}",
        type="model",
        # description of the model checkpoint
        description="Model checkpoint",
        # you can add whatever metadata you want as a dict
        metadata={
            utils.SEED_KEY: self.seed,
            utils.EPOCHS_KEY: self.epochs_run,
            utils.TOTAL_EPOCHS_KEY: self.total_epochs,
            utils.MAX_STEPS_KEY: self.max_steps_per_epoch,
        },
    )
    wandb_artifact.add_file(checkpoint_file)
    wandb.log_artifact(wandb_artifact)
Ignite supports a Weights & Biases handler to log metrics, model/optimizer parameters, and gradients during training and validation. It can also be used to log model checkpoints to the Weights & Biases cloud. This class is also a wrapper for the wandb module, which means that you can call any wandb function using this wrapper. See the examples below on how to save model parameters and gradients.
Basic setup
from argparse import ArgumentParser
import wandb
import torch
from torch import nn
from torch.optim import SGD
from torch.utils.data import DataLoader
import torch.nn.functional as F
from torchvision.transforms import Compose, ToTensor, Normalize
from torchvision.datasets import MNIST
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from tqdm import tqdm
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=-1)


def get_data_loaders(train_batch_size, val_batch_size):
    data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])
    train_loader = DataLoader(
        MNIST(download=True, root=".", transform=data_transform, train=True),
        batch_size=train_batch_size, shuffle=True,
    )
    val_loader = DataLoader(
        MNIST(download=False, root=".", transform=data_transform, train=False),
        batch_size=val_batch_size, shuffle=False,
    )
    return train_loader, val_loader
Using WandBLogger in ignite is a modular process. First, you create a WandBLogger object. Next, you attach it to a trainer or evaluator to automatically log the metrics. This example:
Logs training loss, attached to the trainer object.
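A minimal sketch of those two steps, assuming ignite's contrib WandBLogger handler and the trainer created with create_supervised_trainer from the setup above (the project name and config values are illustrative):
from ignite.contrib.handlers.wandb_logger import WandBLogger

# Step 1: create the WandBLogger (this initializes a W&B run)
wandb_logger = WandBLogger(
    project="pytorch-ignite-integration",  # illustrative project name
    config={"train_batch_size": 64, "val_batch_size": 1000, "epochs": 10},
)

# Step 2: attach it to the trainer so the training loss is logged at every iteration
wandb_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    tag="training",
    output_transform=lambda loss: {"loss": loss},
)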
PyTorch Lightning provides a lightweight wrapper for organizing your PyTorch code and easily adding advanced features such as distributed training and 16-bit precision. W&B provides a lightweight wrapper for logging your ML experiments. But you don’t need to combine the two yourself: Weights & Biases is incorporated directly into the PyTorch Lightning library via the WandbLogger.
Integrate with Lightning
from lightning.pytorch.loggers import WandbLogger
from lightning.pytorch import Trainer
wandb_logger = WandbLogger(log_model="all")
trainer = Trainer(logger=wandb_logger)
Using wandb.log(): The WandbLogger logs to W&B using the Trainer’s global_step. If you make additional calls to wandb.log directly in your code, do not use the step argument in wandb.log().
Instead, log the Trainer’s global_step like your other metrics:
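For example, a sketch of such a call (the accuracy value is illustrative):
wandb.log({"accuracy": 0.99, "trainer/global_step": trainer.global_step})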
# add one parameter
wandb_logger.experiment.config["key"] = value

# add multiple parameters
wandb_logger.experiment.config.update({key1: val1, key2: val2})

# use directly wandb module
wandb.config["key"] = value
wandb.config.update()
Log gradients, parameter histogram and model topology
You can pass your model object to wandb_logger.watch() to monitor your model's gradients and parameters as you train. See the PyTorch Lightning WandbLogger documentation.
Log metrics
You can log your metrics to W&B when using the WandbLogger by calling self.log('my_metric_name', metric_value) within your LightningModule, such as in your training_step or validation_step methods.
The code snippet below shows how to define your LightningModule to log your metrics and your LightningModule hyperparameters. This example uses the torchmetrics library to calculate your metrics.
import torch
from torch.nn import Linear, CrossEntropyLoss, functional as F
from torch.optim import Adam
from torchmetrics.functional import accuracy
from lightning.pytorch import LightningModule
class My_LitModule(LightningModule):
    def __init__(self, n_classes=10, n_layer_1=128, n_layer_2=256, lr=1e-3):
        """method used to define the model parameters"""
        super().__init__()

        # mnist images are (1, 28, 28) (channels, width, height)
        self.layer_1 = Linear(28 * 28, n_layer_1)
        self.layer_2 = Linear(n_layer_1, n_layer_2)
        self.layer_3 = Linear(n_layer_2, n_classes)

        self.loss = CrossEntropyLoss()
        self.lr = lr

        # save hyper-parameters to self.hparams (auto-logged by W&B)
        self.save_hyperparameters()

    def forward(self, x):
        """method used for inference input -> output"""
        # (b, 1, 28, 28) -> (b, 1*28*28)
        batch_size, channels, width, height = x.size()
        x = x.view(batch_size, -1)

        # let's do 3 x (linear + relu)
        x = F.relu(self.layer_1(x))
        x = F.relu(self.layer_2(x))
        x = self.layer_3(x)
        return x

    def training_step(self, batch, batch_idx):
        """needs to return a loss from a single batch"""
        _, loss, acc = self._get_preds_loss_accuracy(batch)

        # Log loss and metric
        self.log("train_loss", loss)
        self.log("train_accuracy", acc)
        return loss

    def validation_step(self, batch, batch_idx):
        """used for logging metrics"""
        preds, loss, acc = self._get_preds_loss_accuracy(batch)

        # Log loss and metric
        self.log("val_loss", loss)
        self.log("val_accuracy", acc)
        return preds

    def configure_optimizers(self):
        """defines model optimizer"""
        return Adam(self.parameters(), lr=self.lr)

    def _get_preds_loss_accuracy(self, batch):
        """convenience function since train/valid/test steps are similar"""
        x, y = batch
        logits = self(x)
        preds = torch.argmax(logits, dim=1)
        loss = self.loss(logits, y)
        acc = accuracy(preds, y)
        return preds, loss, acc
import lightning as L
import torch
import torchvision as tv
from wandb.integration.lightning.fabric import WandbLogger
import wandb
wandb_logger = WandbLogger()  # create the logger; configure project/entity as needed

fabric = L.Fabric(loggers=[wandb_logger])
fabric.launch()
model = tv.models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
model, optimizer = fabric.setup(model, optimizer)
train_dataloader = fabric.setup_dataloaders(
torch.utils.data.DataLoader(train_dataset, batch_size=batch_size)
)
model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        optimizer.zero_grad()
        loss = model(batch)
        loss.backward()
        optimizer.step()
        fabric.log_dict({"loss": loss})
Log the min/max of a metric
Using wandb's define_metric function you can define whether you'd like your W&B summary metric to display the min, max, mean, or best value for that metric. If define_metric isn't used, then the last value logged will appear in your summary metrics. See the define_metric reference docs here and the guide here for more.
To tell W&B to keep track of the max validation accuracy in the W&B summary metric, call wandb.define_metric only once, at the beginning of training:
class My_LitModule(LightningModule):
    ...

    def validation_step(self, batch, batch_idx):
        if trainer.global_step == 0:
            wandb.define_metric("val_accuracy", summary="max")

        preds, loss, acc = self._get_preds_loss_accuracy(batch)

        # Log loss and metric
        self.log("val_loss", loss)
        self.log("val_accuracy", acc)
        return preds
The latest and best aliases are automatically set to easily retrieve a model checkpoint from a W&B Artifact:
# reference can be retrieved in artifacts panel
# "VERSION" can be a version (ex: "v2") or an alias ("latest" or "best")
checkpoint_reference = "USER/PROJECT/MODEL-RUN_ID:VERSION"

# download checkpoint locally (if not already cached)
artifact_dir = wandb_logger.download_artifact(checkpoint_reference, artifact_type="model")

# Request the raw checkpoint
full_checkpoint = fabric.load(Path(artifact_dir) / "model.ckpt")

model.load_state_dict(full_checkpoint["model"])
optimizer.load_state_dict(full_checkpoint["optimizer"])
The model checkpoints you log are viewable through the W&B Artifacts UI, and include the full model lineage (see an example model checkpoint in the UI here).
To bookmark your best model checkpoints and centralize them across your team, you can link them to the W&B Model Registry.
Here you can organize your best models by task, manage the model lifecycle, facilitate easy tracking and auditing throughout the ML lifecycle, and automate downstream actions with webhooks or jobs.
Log images, text, and more
The WandbLogger has log_image, log_text and log_table methods for logging media.
You can also directly call wandb.log or trainer.logger.experiment.log to log other media types such as Audio, Molecules, Point Clouds, 3D Objects and more.
# using tensors, numpy arrays or PIL images
wandb_logger.log_image(key="samples", images=[img1, img2])

# adding captions
wandb_logger.log_image(key="samples", images=[img1, img2], caption=["tree", "person"])

# using file path
wandb_logger.log_image(key="samples", images=["img_1.jpg", "img_2.jpg"])

# using .log in the trainer
trainer.logger.experiment.log(
    {"samples": [wandb.Image(img, caption=caption) for (img, caption) in my_images]},
    step=current_trainer_global_step,
)

# data should be a list of lists
columns = ["input", "label", "prediction"]
my_data = [["cheese", "english", "english"], ["fromage", "french", "spanish"]]

# using columns and data
wandb_logger.log_text(key="my_samples", columns=columns, data=my_data)

# using a pandas DataFrame
wandb_logger.log_text(key="my_samples", dataframe=my_dataframe)

# log a W&B Table that has a text caption, an image and audio
columns = ["caption", "image", "sound"]

# data should be a list of lists
my_data = [
    ["cheese", wandb.Image(img_1), wandb.Audio(snd_1)],
    ["wine", wandb.Image(img_2), wandb.Audio(snd_2)],
]

# log the Table
wandb_logger.log_table(key="my_samples", columns=columns, data=my_data)
You can use Lightning’s Callbacks system to control when you log to Weights & Biases via the WandbLogger, in this example we log a sample of our validation images and predictions:
import torch
import wandb
import lightning.pytorch as pl
from lightning.pytorch.loggers import WandbLogger
# or
# from wandb.integration.lightning.fabric import WandbLogger


class LogPredictionSamplesCallback(pl.Callback):
    def on_validation_batch_end(
        self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx
    ):
        """Called when the validation batch ends."""

        # `outputs` comes from `LightningModule.validation_step`,
        # which corresponds to our model predictions in this case

        # Let's log 20 sample image predictions from the first batch
        if batch_idx == 0:
            n = 20
            x, y = batch
            images = [img for img in x[:n]]
            captions = [
                f"Ground Truth: {y_i} - Prediction: {y_pred}"
                for y_i, y_pred in zip(y[:n], outputs[:n])
            ]

            # Option 1: log images with `WandbLogger.log_image`
            wandb_logger.log_image(key="sample_images", images=images, caption=captions)

            # Option 2: log images and predictions as a W&B Table
            columns = ["image", "ground truth", "prediction"]
            data = [
                [wandb.Image(x_i), y_i, y_pred]
                for x_i, y_i, y_pred in list(zip(x[:n], y[:n], outputs[:n]))
            ]
            wandb_logger.log_table(key="sample_table", columns=columns, data=data)


trainer = pl.Trainer(callbacks=[LogPredictionSamplesCallback()])
Use multiple GPUs with Lightning and W&B
PyTorch Lightning has multi-GPU support through its DDP interface. However, PyTorch Lightning's design requires you to be careful about how you instantiate your GPUs.
Lightning assumes that each GPU (or rank) in your training loop must be instantiated in exactly the same way, with the same initial conditions. However, only the rank 0 process gets access to the wandb.run object; for non-zero rank processes, wandb.run = None. This could cause your non-zero rank processes to fail, and can put you in a deadlock because the rank 0 process will wait for the non-zero rank processes to join, which have already crashed.
For this reason, be careful about how you set up your training code. The recommended way to set it up is to have your code be independent of the wandb.run object.
class MNISTClassifier(pl.LightningModule):
    def __init__(self):
        super(MNISTClassifier, self).__init__()

        self.model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        )

        self.loss = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.forward(x)
        loss = self.loss(y_hat, y)

        self.log("train/loss", loss)
        return {"train_loss": loss}

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.forward(x)
        loss = self.loss(y_hat, y)

        self.log("val/loss", loss)
        return {"val_loss": loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)


def main():
    # Setting all the random seeds to the same value.
    # This is important in a distributed training setting.
    # Each rank will get its own set of initial weights.
    # If they don't match up, the gradients will not match either,
    # leading to training that may not converge.
    pl.seed_everything(1)

    train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=4)
    val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False, num_workers=4)

    model = MNISTClassifier()
    wandb_logger = WandbLogger(project="<project_name>")
    callbacks = [
        ModelCheckpoint(
            dirpath="checkpoints",
            every_n_train_steps=100,
        ),
    ]
    trainer = pl.Trainer(
        max_epochs=3, gpus=2, logger=wandb_logger, strategy="ddp", callbacks=callbacks
    )
    trainer.fit(model, train_loader, val_loader)
Examples
You can follow along in a video tutorial with a Colab here.
Frequently Asked Questions
How does W&B integrate with Lightning?
The core integration is based on the Lightning loggers API, which lets you write much of your logging code in a framework-agnostic way. Loggers are passed to the Lightning Trainer and are triggered based on that API’s rich hook-and-callback system. This keeps your research code well-separated from engineering and logging code.
What does the integration log without any additional code?
We’ll save your model checkpoints to W&B, where you can view them or download them for use in future runs. We’ll also capture system metrics, like GPU usage and network I/O, environment information, like hardware and OS information, code state (including git commit and diff patch, notebook contents and session history), and anything printed to the standard out.
What if I need to use wandb.run in my training setup?
You need to expand the scope of the variable you need to access yourself. In other words, make sure that the initial conditions are the same on all processes.
if os.environ.get("LOCAL_RANK", None) is None:
    os.environ["WANDB_DIR"] = wandb.run.dir
If they are, you can use os.environ["WANDB_DIR"] to set up the model checkpoints directory. This way, any non-zero rank process can access wandb.run.dir.
6.34 - Ray Tune
How to integrate W&B with Ray Tune.
W&B integrates with Ray by offering two lightweight integrations.
The WandbLoggerCallback function automatically logs metrics reported to Tune to the Wandb API.
The setup_wandb() function, which can be used with the function API, automatically initializes the Wandb API with Tune's training information. You can use the Wandb API as usual, such as by using wandb.log() to log your training process.
Configure the integration
from ray.air.integrations.wandb import WandbLoggerCallback
Wandb configuration is done by passing a wandb key to the config parameter of tune.run() (see example below).
The content of the wandb config entry is passed to wandb.init() as keyword arguments. The exception are the following settings, which are used to configure the WandbLoggerCallback itself:
Parameters
project (str): Name of the Wandb project. Mandatory.
api_key_file (str): Path to file containing the Wandb API KEY.
api_key (str): Wandb API Key. Alternative to setting api_key_file.
excludes (list): List of metrics to exclude from the log.
log_config (bool): Whether to log the config parameter of the results dictionary. Defaults to False.
upload_checkpoints (bool): If True, model checkpoints are uploaded as artifacts. Defaults to False.
Example
from ray import tune, train
from ray.air.integrations.wandb import WandbLoggerCallback
def train_fc(config):
    for i in range(10):
        train.report({"mean_accuracy": (i + config["alpha"]) / 10})


tuner = tune.Tuner(
    train_fc,
    param_space={
        "alpha": tune.grid_search([0.1, 0.2, 0.3]),
        "beta": tune.uniform(0.5, 1.0),
    },
    run_config=train.RunConfig(
        callbacks=[
            WandbLoggerCallback(
                project="<your-project>", api_key="<your-api-key>", log_config=True
            )
        ]
    ),
)

results = tuner.fit()
setup_wandb
from ray.air.integrations.wandb import setup_wandb
This utility function helps initialize Wandb for use with Ray Tune. For basic usage, call setup_wandb() in your training function:
from ray.air.integrations.wandb import setup_wandb
def train_fn(config):
    # Initialize wandb
    wandb = setup_wandb(config)

    for i in range(10):
        loss = config["a"] + config["b"]
        wandb.log({"loss": loss})
        tune.report(loss=loss)


tuner = tune.Tuner(
    train_fn,
    param_space={
        # define search space here
        "a": tune.choice([1, 2, 3]),
        "b": tune.choice([4, 5, 6]),
        # wandb configuration
        "wandb": {"project": "Optimization_Project", "api_key_file": "/path/to/file"},
    },
)
results = tuner.fit()
Example Code
We’ve created a few examples for you to see how the integration works:
Dashboard: View dashboard generated from the example.
6.35 - SageMaker
How to integrate W&B with Amazon SageMaker.
W&B integrates with Amazon SageMaker, automatically reading hyperparameters, grouping distributed runs, and resuming runs from checkpoints.
Authentication
W&B looks for a file named secrets.env relative to the training script and loads them into the environment when wandb.init() is called. You can generate a secrets.env file by calling wandb.sagemaker_auth(path="source_dir") in the script you use to launch your experiments. Be sure to add this file to your .gitignore!
Existing estimators
If you're using one of SageMaker's preconfigured estimators, you need to add a requirements.txt to your source directory that includes wandb:
wandb
If you’re using an estimator that’s running Python 2, you’ll need to install psutil directly from this wheel before installing wandb:
Review a complete example on GitHub, and read more on our blog.
You can also read the tutorial on deploying a sentiment analyzer using SageMaker and W&B.
The W&B sweep agent will only behave as expected in a SageMaker job if our SageMaker integration is turned off. You can turn off the SageMaker integration in your runs by modifying your invocation of wandb.init as follows:
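A minimal sketch of that invocation, assuming the sagemaker_disable setting on wandb.Settings (the project name is illustrative):
wandb.init(
    project="my-sweep-project",  # illustrative project name
    settings=wandb.Settings(sagemaker_disable=True),
)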
If you are using Weights and Biases for the first time, check out a quickstart.
pip install wandb
wandb login
!pip install wandb
wandb.login()
Log metrics
import wandb
wandb.init(project="visualize-sklearn")
y_pred = clf.predict(X_test)
accuracy = sklearn.metrics.accuracy_score(y_true, y_pred)
# If logging metrics over time, then use wandb.log
wandb.log({"accuracy": accuracy})

# OR to log a final metric at the end of training you can also use wandb.summary
wandb.summary["accuracy"] = accuracy
After training a model and making predictions you can then generate plots in wandb to analyze your predictions. See the Supported Plots section below for a full list of supported charts
# Visualize single plot
wandb.sklearn.plot_confusion_matrix(y_true, y_pred, labels)
All plots
W&B has functions such as plot_classifier that will plot several relevant plots:
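For example, assuming the standard wandb.sklearn signature, a single call can generate the full set of classifier plots (clf, the data splits, y_pred, y_probas, labels, and feature_names come from your own training code):
wandb.sklearn.plot_classifier(
    clf, X_train, X_test, y_train, y_test, y_pred, y_probas,
    labels, model_name="MyClassifier", feature_names=feature_names,
)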
Plots created with Matplotlib can also be logged to the W&B dashboard. To do that, first install plotly:
pip install plotly
Finally, the plots can be logged on W&B’s dashboard as follows:
import matplotlib.pyplot as plt
import wandb
wandb.init(project="visualize-sklearn")
# do all the plt.plot(), plt.scatter(), etc. here.
# ...

# instead of doing plt.show() do:
wandb.log({"plot": plt})
Supported plots
Learning curve
Trains model on datasets of varying lengths and generates a plot of cross validated scores vs dataset size, for both training and test sets.
wandb.sklearn.plot_learning_curve(model, X, y)
model (clf or reg): Takes in a fitted regressor or classifier.
X (arr): Dataset features.
y (arr): Dataset labels.
ROC
ROC curves plot true positive rate (y-axis) vs false positive rate (x-axis). The ideal score is a TPR = 1 and FPR = 0, which is the point on the top left. Typically we calculate the area under the ROC curve (AUC-ROC), and the greater the AUC-ROC the better.
wandb.sklearn.plot_roc(y_true, y_probas, labels)
y_true (arr): Test set labels.
y_probas (arr): Test set predicted probabilities.
labels (list): Named labels for target variable (y).
Class proportions
Plots the distribution of target classes in training and test sets. Useful for detecting imbalanced classes and ensuring that one class doesn’t have a disproportionate influence on the model.
labels (list): Named labels for target variable (y).
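The call typically looks like the following (assuming the wandb.sklearn API):
wandb.sklearn.plot_class_proportions(y_train, y_test, labels)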
Precision recall curve
Computes the tradeoff between precision and recall for different thresholds. A high area under the curve represents both high recall and high precision, where high precision relates to a low false positive rate, and high recall relates to a low false negative rate.
High scores for both show that the classifier is returning accurate results (high precision), as well as returning a majority of all positive results (high recall). PR curve is useful when the classes are very imbalanced.
labels (list): Named labels for target variable (y).
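Assuming the wandb.sklearn API, a sketch of the call:
wandb.sklearn.plot_precision_recall(y_true, y_probas, labels)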
Feature importances
Evaluates and plots the importance of each feature for the classification task. Only works with classifiers that have a feature_importances_ attribute, like trees.
feature_names (list): Names for features. Makes plots easier to read by replacing feature indexes with corresponding names.
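A sketch of the call, assuming the wandb.sklearn API:
wandb.sklearn.plot_feature_importances(model, feature_names)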
Calibration curve
Plots how well calibrated the predicted probabilities of a classifier are and how to calibrate an uncalibrated classifier. Compares estimated predicted probabilities by a baseline logistic regression model, the model passed as an argument, and by both its isotonic calibration and sigmoid calibrations.
The closer the calibration curves are to a diagonal the better. A transposed sigmoid like curve represents an overfitted classifier, while a sigmoid like curve represents an underfitted classifier. By training isotonic and sigmoid calibrations of the model and comparing their curves we can figure out whether the model is over or underfitting and if so which calibration (sigmoid or isotonic) might help fix this.
wandb.sklearn.plot_calibration_curve(clf, X, y, 'RandomForestClassifier')
model (clf): Takes in a fitted classifier.
X (arr): Training set features.
y (arr): Training set labels.
model_name (str): Model name. Defaults to ‘Classifier’
Confusion matrix
Computes the confusion matrix to evaluate the accuracy of a classification. It’s useful for assessing the quality of model predictions and finding patterns in the predictions the model gets wrong. The diagonal represents the predictions the model got right, such as where the actual label is equal to the predicted label.
model (clf or reg): Takes in a fitted regressor or classifier.
X (arr): Training set features.
y (arr): Training set labels.
X_test (arr): Test set features.
y_test (arr): Test set labels.
Elbow plot
Measures and plots the percentage of variance explained as a function of the number of clusters, along with training times. Useful in picking the optimal number of clusters.
wandb.sklearn.plot_elbow_curve(model, X_train)
model (clusterer): Takes in a fitted clusterer.
X (arr): Training set features.
Silhouette plot
Measures & plots how close each point in one cluster is to points in the neighboring clusters. The thickness of the clusters corresponds to the cluster size. The vertical line represents the average silhouette score of all the points.
Silhouette coefficients near +1 indicate that the sample is far away from the neighboring clusters. A value of 0 indicates that the sample is on or very close to the decision boundary between two neighboring clusters and negative values indicate that those samples might have been assigned to the wrong cluster.
In general we want all silhouette cluster scores to be above average (past the red line) and as close to 1 as possible. We also prefer cluster sizes that reflect the underlying patterns in the data.
cluster_labels (list): Names for cluster labels. Makes plots easier to read by replacing cluster indexes with corresponding names.
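A sketch of the call, assuming the wandb.sklearn API and cluster labels of your choosing:
wandb.sklearn.plot_silhouette(model, X_train, cluster_labels)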
Outlier candidates plot
Measures a datapoint's influence on a regression model via Cook's distance. Instances with heavily skewed influences could potentially be outliers. Useful for outlier detection.
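A sketch of the call, assuming the wandb.sklearn API:
wandb.sklearn.plot_outlier_candidates(model, X, y)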
Residuals plot
Measures and plots the predicted target values (y-axis) vs the difference between actual and predicted target values (x-axis), as well as the distribution of the residual error.
Generally, the residuals of a well-fit model should be randomly distributed because good models will account for most phenomena in a data set, except for random error.
wandb.sklearn.plot_residuals(model, X, y)
model (regressor): Takes in a fitted regressor.
X (arr): Training set features.
y (arr): Training set labels.
If you have any questions, we’d love to answer them in our slack community.
Example
Run in colab: A simple notebook to get you started
6.37 - Simple Transformers
How to integrate W&B with the Transformers library by Hugging Face.
This library is based on the Transformers library by Hugging Face. Simple Transformers lets you quickly train and evaluate Transformer models. Only 3 lines of code are needed to initialize a model, train the model, and evaluate a model. It supports Sequence Classification, Token Classification (NER), Question Answering, Language Model Fine-Tuning, Language Model Training, Language Generation, T5 Model, Seq2Seq Tasks, Multi-Modal Classification, and Conversational AI.
To use Weights and Biases for visualizing model training, set a project name for W&B in the wandb_project attribute of the args dictionary. This logs all hyperparameter values, training losses, and evaluation metrics to the given project.
model = ClassificationModel('roberta', 'roberta-base', args={'wandb_project': 'project-name'})
Any additional arguments that go into wandb.init can be passed as wandb_kwargs.
Structure
The library is designed to have a separate class for every NLP task. The classes that provide similar functionality are grouped together.
simpletransformers.classification - Includes all Classification models.
ClassificationModel
MultiLabelClassificationModel
simpletransformers.ner - Includes all Named Entity Recognition models.
NERModel
simpletransformers.question_answering - Includes all Question Answering models.
QuestionAnsweringModel
Here are some minimal examples
MultiLabel Classification
model = MultiLabelClassificationModel(
    "distilbert",
    "distilbert-base-uncased",
    num_labels=6,
    args={
        "reprocess_input_data": True,
        "overwrite_output_dir": True,
        "num_train_epochs": epochs,
        "learning_rate": learning_rate,
        "wandb_project": "simpletransformers",
    },
)

# Train the model
model.train_model(train_df)

# Evaluate the model
result, model_outputs, wrong_predictions = model.eval_model(eval_df)
SimpleTransformers provides classes as well as training scripts for all common natural language tasks. Here is the complete list of global arguments that are supported by the library, with their default arguments.
You can use Weights & Biases with Skorch to automatically log the model with the best performance, along with all model performance metrics, the model topology and compute resources after each epoch. Every file saved in wandb_run.dir is automatically logged to W&B servers.
Whether to save a checkpoint of the best model and upload it to your Run on W&B servers.
keys_ignored
str or list of str (default=None)
Key or list of keys that should not be logged to tensorboard. Note that in addition to the keys provided by the user, keys such as those starting with event_ or ending on _best are ignored by default.
Example Code
We’ve created a few examples for you to see how the integration works:
# Install wandb... pip install wandb
import wandb
from skorch.callbacks import WandbLogger
# Create a wandb Run
wandb_run = wandb.init()
# Alternative: Create a wandb Run without a W&B account
wandb_run = wandb.init(anonymous="allow")

# Log hyper-parameters (optional)
wandb_run.config.update({"learning rate": 1e-3, "batch size": 32})
net = NeuralNet(..., callbacks=[WandbLogger(wandb_run)])
net.fit(X, y)
Method reference
| Method | Description |
|---|---|
| initialize() | (Re-)Set the initial state of the callback. |
| on_batch_begin(net[, X, y, training]) | Called at the beginning of each batch. |
| on_batch_end(net[, X, y, training]) | Called at the end of each batch. |
| on_epoch_begin(net[, dataset_train, …]) | Called at the beginning of each epoch. |
| on_epoch_end(net, **kwargs) | Log values from the last history step and save best model. |
| on_grad_computed(net, named_parameters[, X, …]) | Called once per batch after gradients have been computed but before an update step was performed. |
| on_train_begin(net, **kwargs) | Log model topology and add a hook for gradients. |
| on_train_end(net[, X, y]) | Called at the end of training. |
6.39 - spaCy
spaCy is a popular “industrial-strength” NLP library: fast, accurate models with a minimum of fuss. As of spaCy v3, Weights and Biases can now be used with spacy train to track your spaCy model’s training metrics as well as to save and version your models and datasets. And all it takes is a few added lines in your configuration.
1. Install the wandb library and log in
pip install wandb
wandb login
!pip install wandb
import wandb
wandb.login()
2. Add the WandbLogger to your spaCy config file
spaCy config files are used to specify all aspects of training, not just logging – GPU allocation, optimizer choice, dataset paths, and more. Minimally, under [training.logger] you need to provide the key @loggers with the value "spacy.WandbLogger.v3", plus a project_name.
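A minimal sketch of that logger block (the project name is illustrative):
[training.logger]
@loggers = "spacy.WandbLogger.v3"
project_name = "my_spacy_project"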
For more on how spaCy training config files work and on other options you can pass in to customize training, check out spaCy’s documentation.
| Name | Description |
|---|---|
| project_name | str. The name of the W&B Project. The project will be created automatically if it doesn't exist yet. |
| remove_config_values | List[str]. A list of values to exclude from the config before it is uploaded to W&B. [] by default. |
| model_log_interval | Optional int. None by default. If set, model versioning with Artifacts will be enabled. Pass in the number of steps to wait between logging model checkpoints. |
| log_dataset_dir | Optional str. If passed a path, the dataset will be uploaded as an Artifact at the beginning of training. None by default. |
| entity | Optional str. If passed, the run will be created in the specified entity. |
| run_name | Optional str. If specified, the run will be created with the specified name. |
3. Start training
Once you have added the WandbLogger to your spaCy training config you can run spacy train as usual.
When training begins, a link to your training run’s W&B page will be output which will take you to this run’s experiment tracking dashboard in the Weights & Biases web UI.
6.40 - Stable Baselines 3
How to integrate W&B with Stable Baseline 3.
Stable Baselines 3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch. W&B’s SB3 integration:
Records metrics such as losses and episodic returns.
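A minimal sketch of the integration, assuming the WandbCallback from wandb.integration.sb3 and an illustrative CartPole PPO setup:
import wandb
from wandb.integration.sb3 import WandbCallback
from stable_baselines3 import PPO

config = {
    "policy_type": "MlpPolicy",
    "total_timesteps": 25000,
    "env_name": "CartPole-v1",
}
run = wandb.init(
    project="sb3",            # illustrative project name
    config=config,
    sync_tensorboard=True,    # auto-upload SB3's TensorBoard metrics
)

model = PPO(config["policy_type"], config["env_name"], verbose=1, tensorboard_log=f"runs/{run.id}")
model.learn(
    total_timesteps=config["total_timesteps"],
    callback=WandbCallback(model_save_path=f"models/{run.id}", verbose=2),
)
run.finish()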
W&B supports embedded TensorBoard for W&B Multi-tenant SaaS.
Upload your TensorBoard logs to the cloud, quickly share your results among colleagues and classmates and keep your analysis in one centralized location.
Get started
import wandb
# Start a wandb run with `sync_tensorboard=True`
wandb.init(project="my-project", sync_tensorboard=True)

# Your training code using TensorBoard
...

# [Optional] Finish the wandb run to upload the tensorboard logs to W&B (if running in a Notebook)
wandb.finish()
Once your run finishes, you can access your TensorBoard event files in W&B and you can visualize your metrics in native W&B charts, together with additional useful information like the system’s CPU or GPU utilization, the git state, the terminal command the run used, and more.
W&B supports TensorBoard with all versions of TensorFlow. W&B also supports TensorBoard 1.14 and higher with PyTorch as well as TensorBoardX.
Frequently asked questions
How can I log metrics to W&B that aren’t logged to TensorBoard?
If you need to log additional custom metrics that aren't being logged to TensorBoard, you can call wandb.log in your code, for example wandb.log({"custom": 0.8}).
Setting the step argument in wandb.log is turned off when syncing TensorBoard. If you'd like to set a different step count, you can log the metrics with a step metric, as shown below.
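A minimal sketch (global_step here is a placeholder for whatever step counter your training loop maintains):
wandb.log({"custom": 0.8, "global_step": global_step})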
How do I configure Tensorboard when I’m using it with wandb?
If you want more control over how TensorBoard is patched you can call wandb.tensorboard.patch instead of passing sync_tensorboard=True to wandb.init.
import wandb
wandb.tensorboard.patch(root_logdir="<logging_directory>")
wandb.init()
# Finish the wandb run to upload the TensorBoard logs to W&B (if running in a notebook)
wandb.finish()
You can pass tensorboard_x=False to this method to ensure vanilla TensorBoard is patched. If you're using TensorBoard > 1.14 with PyTorch, you can pass pytorch=True to ensure it's patched. Both of these options have smart defaults depending on which versions of these libraries have been imported.
By default, we also sync the tfevents files and any .pbtxt files. This enables us to launch a TensorBoard instance on your behalf, and you will see a TensorBoard tab on the run page. This behavior can be turned off by passing save=False to wandb.tensorboard.patch:
import wandb
wandb.init()
wandb.tensorboard.patch(save=False, tensorboard_x=True)
# If running in a notebook, finish the wandb run to upload the TensorBoard logs to W&B
wandb.finish()
You must call either wandb.init or wandb.tensorboard.patch before calling tf.summary.create_file_writer or constructing a SummaryWriter via torch.utils.tensorboard.
How do I sync historical TensorBoard runs?
If you have existing tfevents files stored locally and you would like to import them into W&B, you can run wandb sync log_dir, where log_dir is a local directory containing the tfevents files.
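For example, assuming the event files live under ./logs (an illustrative path):
wandb sync ./logs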
How do I use Google Colab or Jupyter with TensorBoard?
If running your code in a Jupyter or Colab notebook, make sure to call wandb.finish() at the end of your training. This finishes the wandb run and uploads the TensorBoard logs to W&B so they can be visualized. This is not necessary when running a .py script, because wandb finishes automatically when the script finishes.
To run shell commands in a notebook environment, you must prepend a !, as in !wandb sync directoryname.
How do I use PyTorch with TensorBoard?
If you use PyTorch’s TensorBoard integration, you may need to manually upload the PyTorch Profiler JSON file.
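One way to do this is with wandb.save. The following is a minimal sketch; the log directory, project name, and glob pattern are assumptions about where the profiler writes its trace files:
import glob

import wandb

wandb.init(project="pytorch-profiler-demo")  # illustrative project name

# The PyTorch Profiler typically writes *.pt.trace.json files under its log directory
trace_files = glob.glob("./runs/**/*.pt.trace.json", recursive=True)
if trace_files:
    wandb.save(trace_files[0], base_path="./runs")

wandb.finish()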
If you’re already using TensorBoard, it’s easy to integrate with wandb.
import tensorflow as tf
import wandb
wandb.init(config=tf.flags.FLAGS, sync_tensorboard=True)
Log custom metrics
If you need to log additional custom metrics that aren't being logged to TensorBoard, you can call wandb.log in your code, for example wandb.log({"custom": 0.8}).
Setting the step argument in wandb.log is turned off when syncing TensorBoard. If you'd like to set a different step count, you can log the metrics with a step metric, as shown below.
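As above, a minimal sketch (global_step stands in for your loop's own step counter):
wandb.log({"custom": 0.8, "global_step": global_step})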
If you want more control over what gets logged, wandb also provides a hook for TensorFlow estimators. It will log all tf.summary values in the graph.
import tensorflow as tf
import wandb
wandb.init(config=tf.FLAGS)
estimator.train(hooks=[wandb.tensorflow.WandbHook(steps_per_log=1000)])
Log manually
The simplest way to log metrics in TensorFlow is by logging tf.summary with the TensorFlow logger:
import tensorflow as tf
import wandb

with tf.Session() as sess:
    # ...
    wandb.tensorflow.log(tf.summary.merge_all())
With TensorFlow 2, the recommended way to train a model with a custom loop is to use tf.GradientTape. You can read more about it here. If you want to use wandb to log metrics in your custom TensorFlow training loops, you can follow this snippet:
with tf.GradientTape() as tape:
    # Get the probabilities
    predictions = model(features)
    # Calculate the loss
    loss = loss_func(labels, predictions)

# Log your metrics
wandb.log({"loss": loss.numpy()})
# Get the gradients
gradients = tape.gradient(loss, model.trainable_variables)
# Update the weights
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
When the cofounders started working on W&B, they were inspired to build a tool for the frustrated TensorBoard users at OpenAI. Here are a few things we’ve focused on improving:
Reproduce models: Weights & Biases is good for experimentation, exploration, and reproducing models later. We capture not just the metrics, but also the hyperparameters and version of the code, and we can save your version-control status and model checkpoints for you so your project is reproducible.
Automatic organization: Whether you’re picking up a project from a collaborator, coming back from a vacation, or dusting off an old project, W&B makes it easy to see all the models that have been tried so no one wastes hours, GPU cycles, or carbon re-running experiments.
Fast, flexible integration: Add W&B to your project in 5 minutes. Install our free open-source Python package and add a couple of lines to your code, and every time you run your model you’ll have nice logged metrics and records.
Persistent, centralized dashboard: No matter where you train your models, whether on your local machine, in a shared lab cluster, or on spot instances in the cloud, your results are shared to the same centralized dashboard. You don’t need to spend your time copying and organizing TensorBoard files from different machines.
Powerful tables: Search, filter, sort, and group results from different models. It’s easy to look over thousands of model versions and find the best performing models for different tasks. TensorBoard isn’t built to work well on large projects.
Tools for collaboration: Use W&B to organize complex machine learning projects. It's easy to share a link to W&B, and you can use private teams to have everyone sending results to a shared project. We also support collaboration via reports: add interactive visualizations and describe your work in markdown. This is a great way to keep a work log, share findings with your supervisor, or present findings to your lab or team.
Customizing Training Loops in TensorFlow 2 - Article | Dashboard
6.43 - W&B for Julia
How to integrate W&B with Julia.
For those running machine learning experiments in the Julia programming language, a community contributor has created an unofficial set of Julia bindings called wandb.jl that you can use.
You can find examples in the documentation on the wandb.jl repository. Their “Getting Started” example is here:
using Wandb, Dates, Logging
# Start a new run, tracking hyperparameters in config
lg = WandbLogger(project = "Wandb.jl",
                 name = "wandbjl-demo-$(now())",
                 config = Dict("learning_rate" => 0.01,
                               "dropout" => 0.2,
                               "architecture" => "CNN",
                               "dataset" => "CIFAR-100"))

# Use LoggingExtras.jl to log to multiple loggers together
global_logger(lg)

# Simulating the training or evaluation loop
for x ∈ 1:50
    acc = log(1 + x + rand() * get_config(lg, "learning_rate") + rand() + get_config(lg, "dropout"))
    loss = 10 - log(1 + x + rand() + x * get_config(lg, "learning_rate") + rand() + get_config(lg, "dropout"))
    # Log metrics from your script to W&B
    @info "metrics" accuracy=acc loss=loss
end

# Finish the run
close(lg)
The wandb library has a WandbCallback callback for logging metrics, configs and saved boosters from training with XGBoost. Here you can see a live Weights & Biases dashboard with outputs from the XGBoost WandbCallback.
Get started
Logging XGBoost metrics, configs and booster models to Weights & Biases is as easy as passing the WandbCallback to XGBoost:
import wandb
from wandb.integration.xgboost import WandbCallback
from xgboost import XGBClassifier

...

# Start a wandb run
run = wandb.init()

# Pass WandbCallback to the model
bst = XGBClassifier()
bst.fit(X_train, y_train, callbacks=[WandbCallback(log_model=True)])

# Close your wandb run
run.finish()
You can open this notebook for a comprehensive look at logging with XGBoost and Weights & Biases.
WandbCallback reference
Functionality
Passing WandbCallback to an XGBoost model will:
Log the booster model configuration to Weights & Biases.
Log evaluation metrics collected by XGBoost, such as rmse and accuracy, to Weights & Biases.
Log training metrics collected by XGBoost (if you provide data to eval_set).
Log the best score and the best iteration.
Save and upload your trained model to Weights & Biases Artifacts (when log_model=True).
Log the feature importance plot when log_feature_importance=True (default).
Capture the best eval metric in wandb.summary when define_metric=True (default).
Arguments
log_model: (boolean) if True, save and upload the model to Weights & Biases Artifacts.
log_feature_importance: (boolean) if True, log a feature importance bar plot.
importance_type: (str) one of {weight, gain, cover, total_gain, total_cover} for tree models; weight for linear models.
define_metric: (boolean) if True (default), capture model performance at the best step of training, instead of the last step, in your wandb.summary.
Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes Sweeps, a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments.
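As an illustration only (not the library's canonical example), a small random-search sweep over two XGBoost hyperparameters could be wired up as follows; the metric name and the X_train/X_valid variables are assumptions carried over from the snippet above:
import wandb
from wandb.integration.xgboost import WandbCallback
from xgboost import XGBClassifier

# A small random-search sweep over tree depth and learning rate
sweep_config = {
    "method": "random",
    "metric": {"name": "validation_0-logloss", "goal": "minimize"},
    "parameters": {
        "max_depth": {"values": [3, 6, 9]},
        "learning_rate": {"min": 0.01, "max": 0.3},
    },
}
sweep_id = wandb.sweep(sweep_config, project="xgboost-sweeps")

def train():
    run = wandb.init()
    # Hyperparameters for this trial are chosen by the sweep agent
    bst = XGBClassifier(
        max_depth=run.config.max_depth,
        learning_rate=run.config.learning_rate,
    )
    bst.fit(
        X_train, y_train,  # training data, as in the snippet above
        eval_set=[(X_valid, y_valid)],
        callbacks=[WandbCallback()],
    )
    run.finish()

wandb.agent(sweep_id, function=train, count=10)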
Ultralytics’ YOLOv5 (“You Only Look Once”) model family enables real-time object detection with convolutional neural networks without all the agonizing pain.
Weights & Biases is directly integrated into YOLOv5, providing experiment metric tracking, model and dataset versioning, rich model prediction visualization, and more. It’s as easy as running a single pip install before you run your YOLO experiments.
All W&B logging features are compatible with data-parallel multi-GPU training, such as with PyTorch DDP.
Track core experiments
Simply by installing wandb, you’ll activate the built-in W&B logging features: system metrics, model metrics, and media logged to interactive Dashboards.
pip install wandb
git clone https://github.com/ultralytics/yolov5.git
python yolov5/train.py # train a small network on a small dataset
Just follow the links printed to the standard out by wandb.
Customize the integration
By passing a few simple command line arguments to YOLO, you can take advantage of even more W&B features.
Passing a number to --save_period will turn on model versioning. At the end of every save_period epochs, the model weights will be saved to W&B. The best-performing model on the validation set will be tagged automatically.
Turning on the --upload_dataset flag will also upload the dataset for data versioning.
Passing a number to --bbox_interval will turn on data visualization. At the end of every bbox_interval epochs, the outputs of the model on the validation set will be uploaded to W&B.
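Putting these together, a training command might look like the following sketch (the dataset config, epoch count, and interval values are placeholders; flag spellings follow the descriptions above):
python yolov5/train.py --data coco128.yaml --epochs 10 \
    --save_period 1 \
    --upload_dataset \
    --bbox_interval 1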
Ultralytics is the home for cutting-edge, state-of-the-art computer vision models for tasks like image classification, object detection, image segmentation, and pose estimation. Not only does it host YOLOv8, the latest iteration in the YOLO series of real-time object detection models, but also other powerful computer vision models such as SAM (Segment Anything Model), RT-DETR, and YOLO-NAS. Besides providing implementations of these models, Ultralytics also provides out-of-the-box workflows for training, fine-tuning, and applying these models using an easy-to-use API.
The development team has tested the integration with ultralytics v8.0.238 and below. To report any issues with the integration, create a GitHub issue with the tag yolov8.
Track experiments and visualize validation results
This section demonstrates a typical workflow of using an Ultralytics model for training, fine-tuning, and validation and performing experiment tracking, model-checkpointing, and visualization of the model’s performance using W&B.
To use the W&B integration with Ultralytics, import the wandb.integration.ultralytics.add_wandb_callback function.
import wandb
from wandb.integration.ultralytics import add_wandb_callback
from ultralytics import YOLO
Initialize the YOLO model of your choice, and invoke the add_wandb_callback function on it before training or running inference with the model. This ensures that when you perform training, fine-tuning, validation, or inference, the integration automatically saves the experiment logs and the images, overlaid with both the ground truth and the respective prediction results, using W&B's interactive overlays for computer vision tasks, along with additional insights in a wandb.Table.
# Initialize YOLO Model
model = YOLO("yolov8n.pt")

# Add W&B callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)

# Train/fine-tune your model
# At the end of each epoch, predictions on validation batches are logged
# to a W&B table with insightful and interactive overlays for
# computer vision tasks
model.train(project="ultralytics", data="coco128.yaml", epochs=5, imgsz=640)

# Finish the W&B run
wandb.finish()
Here's what experiments tracked using W&B for an Ultralytics training or fine-tuning workflow look like:
Visualize prediction results
To use the W&B integration with Ultralytics for an inference-only workflow, import the wandb.integration.ultralytics.add_wandb_callback function.
import wandb
from wandb.integration.ultralytics import add_wandb_callback
from ultralytics.engine.model import YOLO
Download a few images to test the integration on. You can use still images, videos, or camera sources. For more information on inference sources, check out the Ultralytics docs.
Next, initialize your desired YOLO model and invoke the add_wandb_callback function on it before you perform inference with the model. This ensures that when you perform inference, it automatically logs the images overlaid with your interactive overlays for computer vision tasks along with additional insights in a wandb.Table.
# Initialize YOLO Model
model = YOLO("yolov8n.pt")

# Add W&B callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)

# Perform prediction, which automatically logs to a W&B Table
# with interactive overlays for bounding boxes and segmentation masks
model(
    [
        "./assets/img1.jpeg",
        "./assets/img3.png",
        "./assets/img4.jpeg",
        "./assets/img5.jpeg",
    ]
)

# Finish the W&B run
wandb.finish()
You do not need to explicitly initialize a run with wandb.init() for a training or fine-tuning workflow. However, if your code involves only prediction, you must explicitly create a run, as in the sketch below.
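For a prediction-only script, a minimal sketch (the project name and image path are illustrative) is to open the run yourself before adding the callback:
import wandb
from wandb.integration.ultralytics import add_wandb_callback
from ultralytics import YOLO

# Explicitly start a run, since no training call will create one for us
wandb.init(project="ultralytics", job_type="inference")

model = YOLO("yolov8n.pt")
add_wandb_callback(model)

# Predictions are logged to a W&B Table with interactive overlays
model(["./assets/img1.jpeg"])

wandb.finish()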
YOLOX is an anchor-free version of YOLO with strong performance for object detection. You can use the YOLOX Weights & Biases integration to turn on logging of metrics related to training, validation, and the system, and you can interactively validate predictions with a single command-line argument.
Get started
To use YOLOX with Weights & Biases you will first need to sign up for a Weights & Biases account here.
Then just use the --logger wandb command line argument to turn on logging with wandb. Optionally, you can also pass all of the arguments that wandb.init would expect; just prepend wandb- to the start of each argument.
num_eval_imges controls the number of validation set images and predictions that are logged to Weights & Biases tables for model evaluation.
# login to wandb
wandb login

# call your yolox training script with the `wandb` logger argument
python tools/train.py .... --logger wandb \
    wandb-project <project-name> \
    wandb-entity <entity> \
    wandb-name <run-name> \
    wandb-id <run-id> \
    wandb-save_dir <save-dir> \
    wandb-num_eval_imges <num-images> \
    wandb-log_checkpoints <bool>