Run Page
Each training run of your model gets a dedicated page, organized within its project.
Use the run page to explore detailed information about a single version of your model.

Overview Tab

    Run name, description, and tags
    Run state
      finished: script ended and fully synced data, or called wandb.finish()
      failed: script ended with a non-zero exit status
      crashed: script stopped sending heartbeats in the internal process, which can happen if the machine crashes
      running: script is still running and has recently sent a heartbeat
    Host name, operating system, Python version, and command that launched the run
    List of config parameters saved with wandb.config
    List of summary values saved with wandb.log(), by default set to the last value logged
[Screenshot: the run overview tab in the W&B dashboard]
The Python details are private, even if you make the run page itself public: a logged-out visitor sees the overview without them, while the run's owner sees the full details.

Charts Tab

    Search, group, and arrange visualizations
    Click the pencil icon ✏️ on a graph to edit
      change x-axis, metrics, and ranges
      edit legends, titles, and colors of charts
    View example predictions from your validation set
    To get these charts, log data with wandb.log()

System Tab

    Visualize CPU utilization, system memory, disk I/O, network traffic, GPU utilization, GPU temperature, GPU time spent accessing memory, GPU memory allocated, and GPU power usage
    Lambda Labs highlighted how to use W&B system metrics in a blog post →

Model Tab

    See the layers of your model, the number of parameters, and the output shape of each layer

Logs Tab

    Output printed to the command line: the stdout and stderr from the machine training the model
    The last 1000 lines are shown. After the run has finished, click the download button in the upper right corner to download the full log file.

Files Tab

The W&B Artifacts system adds extra features for handling, versioning, and deduplicating large files like datasets and models. We recommend you use Artifacts for tracking the inputs and outputs of runs, rather than saving raw files directly. Check out the Artifacts quickstart here.

Artifacts Tab

    Provides a searchable list of the input and output Artifacts for this run
    Click a row to see information about a particular artifact used or produced by this run
    See the reference for the project-level Artifacts Tab for more on navigating and using the artifacts viewers in the web app
    View a live example →