Run Page

Each training run of your model gets a dedicated page, organized within its project.

Use the run page to explore detailed information about a single version of your model.

Overview Tab

  • Run name, description, and tags

  • Run state

    • finished: script ended and fully synced data, or called wandb.finish()

    • failed: script ended with a non-zero exit status

    • crashed: the internal process stopped sending heartbeats, which can happen if the machine crashes

    • running: script is still running and has recently sent a heartbeat

  • Host name, operating system, Python version, and command that launched the run

  • List of config parameters saved with wandb.config

  • List of summary metrics; by default, each is set to the last value logged for that key with wandb.log()

View a live example →

W&B Dashboard run overview tab

The Python details are private, even if you make the page itself public. Here is an example of a run page viewed in an incognito window (left) and while logged in as the owner (right).

Charts Tab

  • Search, group, and arrange visualizations

  • Click the pencil icon ✏️ on a graph to edit

    • change x-axis, metrics, and ranges

    • edit legends, titles, and colors of charts

  • View example predictions from your validation set

  • To get these charts, log data with wandb.log()

View a live example →

System Tab

  • Visualize CPU utilization, system memory, disk I/O, network traffic, GPU utilization, GPU temperature, GPU time spent accessing memory, GPU memory allocated, and GPU power usage

  • Lambda Labs highlighted how to use W&B system metrics in a blog post →

View a live example →

Model Tab

  • See the layers of your model, the number of parameters, and the output shape of each layer

View a live example →

Logs Tab

  • Output printed to the command line: the stdout and stderr from the machine training the model

  • We show the last 1000 lines. After the run finishes, click the download button in the upper right corner to download the full log file.

View a live example →

Files Tab

  • Save files to sync with the run using wandb.save()

  • Keep model checkpoints, validation set examples, and more

  • Use the diff.patch to restore the exact version of your code

🌟 New recommendation: Try Artifacts for tracking inputs and outputs

View a live example →