Examples
How to use Weights & Biases: snippets, scripts, interactive notebooks, and videos.
Get an overview of what's possible with Weights & Biases via the three sections below:

Examples by Data Type

CV 🕶
NLP 📚
Tabular 🔢
Audio 🔊

Computer Vision ❤️ W&B

Track your experiments, log your images or videos, analyze your model's predictions, and optimize your hyperparameters.

Easily track hyperparameters and log metrics

Every time you run your code, it's captured and visualized in W&B.
wandb.init(project='my-resnet', config={'lr': 0.01, ...})
wandb.log({'loss': loss, ...})

Log Images

Look at individual images and predictions to better understand your models.
image = wandb.Image(array_or_path, caption="Input image")
wandb.log({"examples": image})

Log Videos

import numpy as np

# axes are (time, channel, height, width)
frames = np.random.randint(low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8)
wandb.log({"video": wandb.Video(frames, fps=4)})

Log Segmentation Masks

mask_data = np.array([[1, 2, 2, ..., 2, 2, 1], ...])

class_labels = {
    1: "tree",
    2: "car",
    3: "road"
}

mask_img = wandb.Image(image, masks={
    "predictions": {
        "mask_data": mask_data,
        "class_labels": class_labels
    }
})
Interactive mask viewing

Log Bounding Boxes

class_id_to_label = {
    1: "car",
    2: "road",
    3: "building"
}

img = wandb.Image(image, boxes={
    "predictions": {
        "box_data": [{
            "position": {
                "minX": 0.1,
                "maxX": 0.2,
                "minY": 0.3,
                "maxY": 0.4
            },
            "class_id": 2,
            "box_caption": class_id_to_label[2],
            "scores": {
                "acc": 0.1,
                "loss": 1.2
            },
        }],
        "class_labels": class_id_to_label
    }
})

wandb.log({"driving_scene": img})
Interactive bounding box viewing.
Read more: Log Media & Objects

Log Tables of predictions

Use W&B Tables to interact with your model predictions. Dynamically surface your model's incorrect predictions, most confused classes, or difficult corner cases.
Grouped predictions using W&B Tables
# Define the names of the columns in your Table
column_names = ["image_id", "image", "label", "prediction"]

# Prepare your data, row-wise
# You can log filepaths or image tensors with wandb.Image
my_data = [
    ['img_0.jpg', wandb.Image("data/images/img_0.jpg"), 0, 0],
    ['img_1.jpg', wandb.Image("data/images/img_1.jpg"), 8, 0],
    ['img_2.jpg', wandb.Image("data/images/img_2.jpg"), 7, 1],
    ['img_3.jpg', wandb.Image("data/images/img_3.jpg"), 1, 1]
]

# Create your W&B Table
val_table = wandb.Table(data=my_data, columns=column_names)

# Log the Table to W&B
wandb.log({'my_val_table': val_table})

Integrations

What's Next?

NLP ❤️ W&B

It's easy to integrate W&B into your NLP projects. Make your work more reproducible, visible and debuggable.

Track your experiments metrics and hyperparameters

Every time you run your code, it's captured and visualized in W&B.
wandb.init(project='my-transformer', config={'lr': 0.01, ...})
wandb.log({'accuracy': accuracy, ...})

Log text, custom HTML and displacy visualizations

Log text, custom HTML, or even displacy visualizations within W&B Tables. Combine your text data with your model's prediction outputs for model evaluation. You can then dynamically filter, sort, or group in the UI to drill down into your model's performance.
# Your data
headlines = ['Square(SQ) Surpasses Q4...', ...]

# 1️⃣ Create the W&B Table
text_table = wandb.Table(columns=["Headline", "Positive", "Negative", "Neutral"])

for headline in headlines:
    pos_score, neg_score, neutral_score = model(headline)
    # 2️⃣ Add the data
    text_table.add_data(headline, pos_score, neg_score, neutral_score)

# 3️⃣ Log the Table to wandb
wandb.log({"validation_samples": text_table})
Text and model scores in a W&B Table
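The table above holds plain text and scores; the same pattern works for rendered HTML. Here is a minimal sketch of logging displacy entity visualizations as wandb.Html columns in a Table, assuming spaCy's en_core_web_sm pipeline is installed and using a placeholder texts list:

import spacy
from spacy import displacy
import wandb

wandb.init(project="my-transformer")  # project name is just an example

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy pipeline is installed
texts = ["Square(SQ) Surpasses Q4 Expectations"]  # placeholder data

html_table = wandb.Table(columns=["text", "entities"])
for text in texts:
    doc = nlp(text)
    # Render the named-entity visualization to an HTML string and wrap it for W&B
    html = displacy.render(doc, style="ent", jupyter=False)
    html_table.add_data(text, wandb.Html(html))

wandb.log({"displacy_samples": html_table})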

Integrations

What's Next?

Tabular ❤️ W&B

Weights & Biases supports logging pandas DataFrames and iterative modelling with traditional ML, and has integrations with Scikit-Learn, XGBoost, LightGBM, CatBoost, and PyCaret.
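For instance, the scikit-learn integration can log interactive charts such as a confusion matrix in a single call. A minimal sketch, using the iris dataset and a random forest purely as stand-ins:

import wandb
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

wandb.init(project="my-sklearn")  # project name is just an example

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Log an interactive confusion matrix chart via the scikit-learn integration
wandb.sklearn.plot_confusion_matrix(y_test, y_pred, labels=["setosa", "versicolor", "virginica"])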

Track your experiments

Every time you run your code, it's captured and visualized in W&B.
wandb.init(project='my-xgb', config={'lr': 0.01, ...})
wandb.log({'loss': loss, ...})

Log and explore your data

Log a pandas DataFrame to associate it with a particular experiment, or to interactively explore it in W&B Tables in the workspace.
# Create a W&B Table from your pandas DataFrame
table = wandb.Table(dataframe=my_df)

# Log the Table to your W&B workspace
wandb.log({'dataframe_in_table': table})

Integrations

What's Next?

Audio ❤️ W&B

Weights & Biases supports logging audio data arrays or files that can be played back in W&B.

Track your experiments

Every time you run your code, it's captured and visualized in W&B.
wandb.init(project='my-bird-calls', config={'lr': 0.01, ...})
wandb.log({'loss': loss, ...})

Log audio files or arrays

You can log audio files and data arrays with wandb.Audio().
# Log an audio array or file
wandb.log({"my whale song": wandb.Audio(
    array_or_path, caption="monterey whale 0034", sample_rate=32)})

# OR

# Log your audio as part of a W&B Table
my_table = wandb.Table(columns=["audio", "spectrogram", "bird_class", "prediction"])

for (audio_arr, spec, label) in my_data:
    pred = model(audio_arr)

    # Add the data to a W&B Table
    audio = wandb.Audio(audio_arr, sample_rate=32)
    img = wandb.Image(spec)
    my_table.add_data(audio, img, label, pred)

# Log the Table to wandb
wandb.log({"validation_samples": my_table})

Integrations

What's Next?

Examples by ML Library

Weights & Biases works natively with PyTorch, TensorFlow, and JAX, and also has logging integrations for all of the popular open-source machine learning libraries, including the ones below as well as spaCy, XGBoost, LightGBM, scikit-learn, YOLOv5, fastai, and more.
📉 TensorBoard
⚡ PyTorch Lightning
🟥 Keras
🤗 Transformers
W&B supports TensorBoard to automatically log all the metrics from your script into our dashboards with just 2 lines:
import wandb

# Add `sync_tensorboard=True` when you start a W&B run
wandb.init(project='my-project', sync_tensorboard=True)

# Your Keras, TensorFlow or PyTorch code using TensorBoard
...

# Call wandb.finish() to upload your TensorBoard logs to W&B
wandb.finish()
With the WandbLogger in PyTorch Lightning you can log your metrics, model checkpoints, media and more!
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning import Trainer

# Add the WandbLogger to your PyTorch Lightning Trainer
trainer = Trainer(logger=WandbLogger())
With our Keras WandbCallback you can log your metrics, model checkpoints, media and more!
import wandb
from wandb.keras import WandbCallback

# Initialise a W&B run
wandb.init(config={"hyper": "parameter"})

...

# Add the WandbCallback to your Keras callbacks
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
With the W&B integration in Hugging Face Transformers' Trainer you can log your metrics, model checkpoints, run sweeps and more!
from transformers import TrainingArguments, Trainer

# Add `report_to="wandb"` in your TrainingArguments to start logging to W&B
args = TrainingArguments(..., report_to="wandb")
trainer = Trainer(..., args=args)

Examples by Application

Point Clouds: See LIDAR point cloud visualizations from the Lyft dataset. These are interactive and have bounding box annotations. Click the full screen button in the corner of an image, then zoom, rotate, and pan around the 3D scene. A minimal logging sketch appears after this list.
Segmentation: This report describes how to log and interact with image masks for semantic segmentation.
Bounding Boxes: Examples and a walkthrough of how to annotate driving scenes for object detection.
3D from Video: Infer depth perception from dashboard camera videos. This example contains lots of sample images from road scenes and shows how to use the media panel for visualizing data in W&B.
Deep Drive: This report compares models for detecting humans in scenes from roads, with lots of charts, images, and notes. The project page workspace is also available.
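Point clouds like the ones in the Lyft example can be logged with wandb.Object3D. A minimal sketch, using a random numpy array as a stand-in for real LIDAR points:

import numpy as np
import wandb

wandb.init(project="my-lidar")  # project name is just an example

# N x 3 array of (x, y, z) points; a 4th column can hold a category,
# or columns 4-6 can hold RGB color values
points = np.random.uniform(low=-10, high=10, size=(1000, 3))

wandb.log({"point_cloud": wandb.Object3D(points)})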

Biomedical

2D Molecules: This report explores training models to predict how soluble a molecule is in water based on its chemical formula. This example features scikit-learn and sweeps.
3D Molecules: This report explores molecular binding and shows interactive 3D protein visualizations.
X Rays: This report explores chest X-ray data and strategies for handling real-world long-tailed data.
RDKit: This report explores the RDKit features for logging molecular data. Click here to view and interact with a live W&B Dashboard built with this notebook. A minimal logging sketch follows below.
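A minimal sketch of logging molecular data via RDKit with wandb.Molecule, assuming a recent wandb version with RDKit support and using caffeine's SMILES string purely as an example:

import wandb

wandb.init(project="my-molecules")  # project name is just an example

# Log a molecule from a SMILES string (requires rdkit to be installed)
caffeine = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"
wandb.log({"caffeine": wandb.Molecule.from_smiles(caffeine)})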

Finance

Credit Scorecards
Track experiments, generate a credit scorecard for loan defaults, and run a hyperparameter sweep to find the best hyperparameters. Click here to view and interact with a live W&B Dashboard built with this notebook.
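A minimal sketch of defining and launching such a sweep from Python; the search space, metric name, and train_scorecard_model function are placeholders:

import wandb

# Hypothetical search space for the scorecard model
sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.001, "max": 0.1},
        "max_depth": {"values": [3, 5, 7]},
    },
}

def train():
    # Each agent run reads its hyperparameters from wandb.config
    with wandb.init() as run:
        val_loss = train_scorecard_model(wandb.config)  # placeholder training function
        wandb.log({"val_loss": val_loss})

sweep_id = wandb.sweep(sweep_config, project="credit-scorecard")  # example project name
wandb.agent(sweep_id, function=train, count=10)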