Log Media & Objects
Log rich media, from 3D point clouds and molecules to HTML and histograms
We support images, video, audio, and more. Log rich media to explore your results and visually compare your runs, models, and datasets. Read on for examples and how-to guides.
Looking for reference docs for our media types? You want this page.
You can see working code to log all of these media objects in this Colab Notebook, check out what the results look like on wandb.ai here, and follow along with a video tutorial, linked above.

Images

Log images to track inputs, outputs, filter weights, activations, and more!
Inputs and outputs of an autoencoder network performing in-painting.
Images can be logged directly from numpy arrays, as PIL images, or from the filesystem.
It's recommended to log fewer than 50 images per step to prevent logging from becoming a bottleneck during training and image loading from becoming a bottleneck when viewing results.
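For example, a minimal sketch that logs only the first 32 images of a batch (the batch here is a random stand-in for your own data):

import numpy as np
import wandb

# a stand-in batch of 128 random RGB images with shape (N, height, width, channels)
batch = np.random.randint(0, 256, size=(128, 64, 64, 3), dtype=np.uint8)

# assumes wandb.init() has already been called; log only the first 32 images
wandb.log({"examples": [wandb.Image(im) for im in batch[:32]]})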
Logging Arrays as Images
Logging PIL Images
Logging Images from Files
Provide arrays directly when constructing images manually, e.g. using make_grid from torchvision.
Arrays are converted to PNG using Pillow.
images = wandb.Image(image_array, caption="Top: Output, Bottom: Input")

wandb.log({"examples": images})
We assume the image is grayscale if the last dimension is 1, RGB if it's 3, and RGBA if it's 4. If the array contains floats, we convert them to integers between 0 and 255. If you want to normalize your images differently, you can specify the mode manually or just supply a PIL.Image, as described in the "Logging PIL Images" tab of this panel.
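For instance, a minimal sketch that passes the mode explicitly, assuming a 2D float array that should be treated as a single-channel grayscale image (the array and key name here are placeholders):

import numpy as np
import wandb

# a 2D float array, e.g. an attention map; floats are scaled to integers in [0, 255]
heatmap = np.random.rand(28, 28)

# mode follows PIL conventions: "L" for grayscale, "RGB", "RGBA", etc.
# assumes wandb.init() has already been called
wandb.log({"attention_map": wandb.Image(heatmap, mode="L")})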
For full control over the conversion of arrays to images, construct the PIL.Image yourself and provide it directly.
images = [PIL.Image.fromarray(image) for image in image_array]

wandb.log({"examples": [wandb.Image(image) for image in images]})
For even more control, create images however you like, save them to disk, and provide a filepath.
im = PIL.Image.fromarray(...)
rgb_im = im.convert('RGB')
rgb_im.save('myimage.jpg')

wandb.log({"example": wandb.Image("myimage.jpg")})

Image Overlays

Segmentation Masks
Bounding Boxes
Log semantic segmentation masks and interact with them (altering opacity, viewing changes over time, and more) via the W&B UI.
Interactive mask viewing in the W&B UI.
To log an overlay, you'll need to provide a dictionary with the following keys and values to the masks keyword argument of wandb.Image:
    one of two keys representing the image mask:
      "mask_data": a 2D numpy array containing an integer class label for each pixel
      "path": (string) a path to a saved image mask file
    "class_labels": (optional) a dictionary mapping the integer class labels in the image mask to their readable class names
To log multiple masks, log a mask dictionary with multiple keys, as in the code snippet below.
mask_data = np.array([[1, 2, 2, ..., 2, 2, 1], ...])

class_labels = {
    1: "tree",
    2: "car",
    3: "road"
}

mask_img = wandb.Image(image, masks={
    "predictions": {
        "mask_data": mask_data,
        "class_labels": class_labels
    },
    "ground_truth": {
        ...
    },
    ...
})
Log bounding boxes with images, and use filters and toggles to dynamically visualize different sets of boxes in the UI.
To log a bounding box, you'll need to provide a dictionary with the following keys and values to the boxes keyword argument of wandb.Image:
    box_data: a list of dictionaries, one for each box. The box dictionary format is described below.
      position: a dictionary representing the position and size of the box in one of two formats, as described below. Boxes need not all use the same format.
        Option 1: {"minX", "maxX", "minY", "maxY"}. Provide a set of coordinates defining the upper and lower bounds of each box dimension.
        Option 2: {"middle", "width", "height"}. Provide a set of coordinates specifying the middle coordinates as [x,y], and width and height as scalars.
      class_id: an integer representing the class identity of the box. See class_labels key below.
      scores: a dictionary of string labels and numeric values for scores. Can be used for filtering boxes in the UI.
      domain: specify the units/format of the box coordinates. Set this to "pixel" if the box coordinates are expressed in pixel space (i.e. as integers within the bounds of the image dimensions). By default, the domain is assumed to be a fraction/percentage of the image (a floating point number between 0 and 1).
      box_caption: (optional) a string to be displayed as the label text on this box
    class_labels: (optional) A dictionary mapping class_ids to strings. By default we will generate class labels class_0, class_1, etc.
Check out this example:
class_id_to_label = {
    1: "car",
    2: "road",
    3: "building",
    ...
}

img = wandb.Image(image, boxes={
    "predictions": {
        "box_data": [
            {
                # one box expressed in the default relative/fractional domain
                "position": {
                    "minX": 0.1,
                    "maxX": 0.2,
                    "minY": 0.3,
                    "maxY": 0.4
                },
                "class_id": 2,
                "box_caption": class_id_to_label[2],
                "scores": {
                    "acc": 0.1,
                    "loss": 1.2
                }
            },
            {
                # another box expressed in the pixel domain
                # (for illustration purposes only, all boxes are likely
                # to be in the same domain/format)
                "position": {
                    "middle": [150, 20],
                    "width": 68,
                    "height": 112
                },
                "domain": "pixel",
                "class_id": 3,
                "box_caption": "a building",
                "scores": {
                    "acc": 0.5,
                    "loss": 0.7
                }
            },
            ...
            # Log as many boxes as needed
        ],
        "class_labels": class_id_to_label
    },
    # Log each meaningful group of boxes with a unique key name
    "ground_truth": {
        ...
    }
})

wandb.log({"driving_scene": img})

Histograms

Basic Histogram Logging
Flexible Histogram Logging
Histograms in Summary
If a sequence of numbers (e.g. list, array, tensor) is provided as the first argument, we will construct the histogram automatically by calling np.histogram. Note that all arrays/tensors are flattened. You can use the optional num_bins keyword argument to override the default of 64 bins. The maximum number of bins supported is 512.
In the UI, histograms are plotted with the training step on the x-axis, the metric value on the y-axis, and the count represented by color, to ease comparison of histograms logged throughout training. See the "Histograms in Summary" tab of this panel for details on logging one-off histograms.
wandb.log({"gradients": wandb.Histogram(grads)})
Gradients for the discriminator in a GAN.
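If the default of 64 bins is too coarse or too fine, pass num_bins explicitly (up to the maximum of 512). A minimal sketch, where grads stands in for any array-like of values and an active run is assumed:

wandb.log({"gradients": wandb.Histogram(grads, num_bins=128)})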
If you want more control, call np.histogram and pass the returned tuple to the np_histogram keyword argument.
np_hist_grads = np.histogram(grads, density=True, range=(0., 1.))
wandb.log({"gradients": wandb.Histogram(np_histogram=np_hist_grads)})
wandb.run.summary.update(  # if only in summary, only visible on overview tab
    {"final_logits": wandb.Histogram(logits)})
If histograms are in your summary they will appear on the Overview tab of the Run Page. If they are in your history, we plot a heatmap of bins over time on the Charts tab.

3D Visualizations

3D Object
Point Clouds
Molecules
Log files in any of these formats: 'obj', 'gltf', 'glb', 'babylon', 'stl', or 'pts.json', and we will render them in the UI when your run finishes.
wandb.log({"generated_samples":
           [wandb.Object3D(open("sample.obj")),
            wandb.Object3D(open("sample.gltf")),
            wandb.Object3D(open("sample.glb"))]})
Ground truth and prediction of a headphones point cloud
Log 3D point clouds and Lidar scenes with bounding boxes. Pass in a numpy array containing coordinates and colors for the points to render. In the UI, we truncate to 300,000 points.
point_cloud = np.array([[0, 0, 0, COLOR...], ...])

wandb.log({"point_cloud": wandb.Object3D(point_cloud)})
Three different shapes of numpy arrays are supported for flexible color schemes.
    [[x, y, z], ...] nx3
    [[x, y, z, c], ...] nx4 | c is a category in the range [1, 14] (Useful for segmentation)
    [[x, y, z, r, g, b], ...] nx6 | r, g, b are values in the range [0, 255] for the red, green, and blue color channels.
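For instance, a minimal sketch of the nx6 format with explicit RGB colors (the coordinates and colors here are arbitrary):

import numpy as np
import wandb

# each row is [x, y, z, r, g, b]; r, g, b are integers in [0, 255]
colored_points = np.array([
    [0.0, 0.0, 0.0, 255, 0, 0],  # a red point at the origin
    [1.0, 0.0, 0.5, 0, 255, 0],  # a green point
    [0.5, 1.0, 1.0, 0, 0, 255],  # a blue point
])

# assumes wandb.init() has already been called
wandb.log({"colored_point_cloud": wandb.Object3D(colored_points)})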
Here's an example of logging a scene with 3D bounding boxes (the code follows this list):
    points is a numpy array with the same format as the simple point cloud renderer shown above.
    boxes is a numpy array of python dictionaries with three attributes:
      corners - a list of eight corners
      label - a string representing the label to be rendered on the box (optional)
      color - RGB values representing the color of the box
    type is a string representing the scene type to render. Currently the only supported value is lidar/beta.
# Log points and boxes in W&B
point_scene = wandb.Object3D({
    "type": "lidar/beta",
    "points": np.array(  # add points, as in a point cloud
        [
            [0.4, 1, 1.3],
            [1, 1, 1],
            [1.2, 1, 1.2]
        ]
    ),
    "boxes": np.array(  # draw 3d boxes
        [
            {
                "corners": [
                    [0, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0],
                    [1, 1, 0],
                    [0, 1, 1],
                    [1, 0, 1],
                    [1, 1, 1]
                ],
                "label": "Box",
                "color": [123, 221, 111],
            },
            {
                "corners": [
                    [0, 0, 0],
                    [0, 2, 0],
                    [0, 0, 2],
                    [2, 0, 0],
                    [2, 2, 0],
                    [0, 2, 2],
                    [2, 0, 2],
                    [2, 2, 2]
                ],
                "label": "Box-2",
                "color": [111, 221, 0],
            }
        ]
    ),
    "vectors": np.array(  # add 3d vectors
        [
            {"start": [0, 0, 0], "end": [0.1, 0.2, 0.5]}
        ]
    )
})
wandb.log({"point_scene": point_scene})
wandb.log({"protein": wandb.Molecule(open("6lu7.pdb"))})
Log molecular data in any of 10 file types: pdb, pqr, mmcif, mcif, cif, sdf, sd, gro, mol2, or mmtf.
When your run finishes, you'll be able to interact with 3D visualizations of your molecules in the UI.

Other Media

Weights & Biases also supports logging of a variety of other media types.
Audio
Video
Text
HTML
wandb.log(
    {"whale songs": wandb.Audio(np_array, caption="OooOoo", sample_rate=32)})
The maximum number of audio clips that can be logged per step is 100.
wandb.log(
    {"video": wandb.Video(numpy_array_or_path_to_video, fps=4, format="gif")})
If a numpy array is supplied we assume the dimensions are, in order: time, channels, width, height. By default we create a 4 fps gif image (ffmpeg and the moviepy python library are required when passing numpy objects). Supported formats are "gif", "mp4", "webm", and "ogg". If you pass a string to wandb.Video we assert the file exists and is a supported format before uploading to wandb. Passing a BytesIO object will create a tempfile with the specified format as the extension.
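For example, a minimal sketch that logs a short random clip from a numpy array (the shape and values here are placeholders, following the dimension order described above):

import numpy as np
import wandb

# 10 frames, 3 channels, 64x64 pixels, with uint8 values in [0, 255]
frames = np.random.randint(0, 256, size=(10, 3, 64, 64), dtype=np.uint8)

# assumes wandb.init() has already been called; requires ffmpeg and moviepy
wandb.log({"random_clip": wandb.Video(frames, fps=4, format="gif")})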
On the W&B Run and Project Pages, you will see your videos in the Media section.
Use wandb.Table to log text in tables that show up in the UI. By default, the column headers are ["Input", "Output", "Expected"]. To ensure optimal UI performance, the default maximum number of rows is set to 10,000. However, you can explicitly override the maximum with wandb.Table.MAX_ROWS = {DESIRED_MAX}.
columns = ["Text", "Predicted Sentiment", "True Sentiment"]

# Method 1
data = [["I love my phone", "1", "1"], ["My phone sucks", "0", "-1"]]
table = wandb.Table(data=data, columns=columns)
wandb.log({"examples": table})

# Method 2
table = wandb.Table(columns=columns)
table.add_data("I love my phone", "1", "1")
table.add_data("My phone sucks", "0", "-1")
wandb.log({"examples": table})
You can also pass a pandas DataFrame object.
table = wandb.Table(dataframe=my_dataframe)
wandb.log({"custom_file": wandb.Html(open("some.html"))})
wandb.log({"custom_string": wandb.Html('<a href="https://mysite">Link</a>')})
Custom HTML can be logged at any key, and this exposes an HTML panel on the run page. By default we inject default styles; you can disable them by passing inject=False.
wandb.log({"custom_file": wandb.Html(open("some.html"), inject=False)})

Frequently Asked Questions

How can I compare images or media across epochs or steps?

Each time you log images from a step, we save them to show in the UI. Expand the image panel, and use the step slider to look at images from different steps. This makes it easy to compare how a model's output changes during training.

What if I want to integrate W&B into my project, but I don't want to upload any images or media?

W&B can be used even for projects that only log scalars — you specify any files or data you'd like to upload explicitly. Here's a quick example in PyTorch that does not log images.
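That example is in PyTorch; as a rough, framework-agnostic sketch of the same idea (the metric values here are stand-ins for the ones your training loop computes), scalar-only logging looks like this:

import random
import wandb

wandb.init(project="scalars-only")  # hypothetical project name

for epoch in range(10):
    # stand-ins for real training/validation metrics
    train_loss = 1.0 / (epoch + 1)
    val_acc = random.random()
    # only scalars are logged; no images or other media are uploaded
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_acc": val_acc})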

How do I log a PNG?

wandb.Image converts numpy arrays or instances of PIL.Image to PNGs by default.
wandb.log({"example": wandb.Image(...)})
# Or multiple images
wandb.log({"example": [wandb.Image(img) for img in images]})

How do I log a video?

Videos are logged using the wandb.Video data type:
wandb.log({"example": wandb.Video("myvideo.mp4")})
Now you can view videos in the media browser. Go to your project workspace, run workspace, or report and click "Add visualization" to add a rich media panel.

How do I navigate and zoom in point clouds?

You can hold control and use the mouse to move around inside the space.