
W&B Models

W&B Models is the system of record for ML Practitioners who want to organize their models, boost productivity and collaboration, and deliver production ML at scale.

W&B Models architecture diagram

With W&B Models, you can:

  • Track and visualize machine learning experiments
  • Manage model versions and lineage
  • Optimize hyperparameters

Machine learning practitioners rely on W&B Models as their ML system of record.

1 - Experiments

Track machine learning experiments with W&B.

Track machine learning experiments with a few lines of code. You can then review the results in an interactive dashboard or export your data to Python for programmatic access using our Public API.

Use W&B Integrations if you work with popular frameworks such as PyTorch, Keras, or scikit-learn. See our Integration guides for a full list of integrations and information on how to add W&B to your code.
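For example, with the Keras integration you can log training metrics through a callback instead of calling the logging API yourself. The following is a minimal sketch: it assumes the WandbMetricsLogger callback from the Keras integration, and model, x_train, and y_train are placeholders for an already-compiled Keras model and its training data.

import wandb
from wandb.integration.keras import WandbMetricsLogger

# Start a run, then let the callback send Keras metrics (loss, accuracy, ...)
# to W&B at the end of each epoch.
with wandb.init(project="my-project-name") as run:
    model.fit(
        x_train,
        y_train,
        epochs=10,
        callbacks=[WandbMetricsLogger()],  # model, x_train, y_train are placeholders
    )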

Experiments dashboard

The image above shows an example dashboard where you can view and compare metrics across multiple runs.

How it works

Track a machine learning experiment with a few lines of code:

  1. Create a W&B Run.
  2. Store a dictionary of hyperparameters, such as learning rate or model type, into your configuration (wandb.Run.config).
  3. Log metrics (wandb.Run.log()) over time in a training loop, such as accuracy and loss.
  4. Save outputs of a run, like the model weights or a table of predictions.

The following code demonstrates a common W&B experiment tracking workflow:

import wandb

# Start a run.
#
# When this block exits, it waits for logged data to finish uploading.
# If an exception is raised, the run is marked failed.
with wandb.init(entity="", project="my-project-name") as run:
    # Save model inputs and hyperparameters.
    run.config.learning_rate = 0.01

    # Run your experiment code.
    for epoch in range(num_epochs):
        # Do some training...

        # Log metrics over time to visualize model performance.
        run.log({"loss": loss})

    # Upload model outputs as artifacts. Here we assume the model object
    # provides a save() method that writes a file to disk.
    model.save("path_to_model.onnx")
    run.log_artifact("path_to_model.onnx", name="trained-model", type="model")

Get started

Depending on your use case, explore the following resources to get started with W&B Experiments:

  • Read the W&B Quickstart for a step-by-step outline of the W&B Python SDK commands you could use to create, track, and use a dataset artifact.
  • Explore this chapter to learn how to:
    • Create an experiment
    • Configure experiments
    • Log data from experiments
    • View results from experiments
  • Explore the W&B Python Library within the W&B API Reference Guide.

Best practices and tips

For best practices and tips for experiments and logging, see Best Practices: Experiments and Logging.

1.1 - Create an experiment

Create a W&B Experiment.

Use the W&B Python SDK to track machine learning experiments. You can then review the results in an interactive dashboard or export your data to Python for programmatic access with the W&B Public API.
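For instance, you can pull a finished run's logged metrics into a pandas DataFrame. This is a minimal sketch of the Public API; the entity, project, and run ID are placeholders you replace with your own values.

import wandb

# Access a finished run programmatically with the Public API.
api = wandb.Api()
api_run = api.run("<entity>/<project>/<run_id>")

# history() returns the run's logged metrics as a pandas DataFrame.
history = api_run.history()
print(history.head())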

This guide describes how to use W&B building blocks to create a W&B Experiment.

How to create a W&B Experiment

Create a W&B Experiment in four steps:

  1. Initialize a W&B Run
  2. Capture a dictionary of hyperparameters
  3. Log metrics inside your training loop
  4. Log an artifact to W&B

Initialize a W&B run

Use wandb.init() to create a W&B Run.

The following snippet creates a run in a W&B project named “cat-classification” with the description “My first experiment” to help identify this run. Tags “baseline” and “paper1” are included to remind us that this run is a baseline experiment intended for a future paper publication.

import wandb

with wandb.init(
    project="cat-classification",
    notes="My first experiment",
    tags=["baseline", "paper1"],
) as run:
    ...

wandb.init() returns a Run object.

Capture a dictionary of hyperparameters

Save a dictionary of hyperparameters such as learning rate or model type. The model settings you capture in config are useful later to organize and query your results.

with wandb.init(
    ...,
    config={"epochs": 100, "learning_rate": 0.001, "batch_size": 128},
) as run:
    ...

For more information on how to configure an experiment, see Configure Experiments.

Log metrics inside your training loop

Call run.log() to log metrics about each training step such as accuracy and loss.

model, dataloader = get_model(), get_data()

for epoch in range(run.config.epochs):
    for batch in dataloader:
        loss, accuracy = model.training_step()
        run.log({"accuracy": accuracy, "loss": loss})

For more information on different data types you can log with W&B, see Log Data During Experiments.

Log an artifact to W&B

Optionally log a W&B Artifact. Artifacts make it easy to version datasets and models.

# You can save any file or even a directory. In this example, we pretend
# the model has a save() method that outputs an ONNX file.
model.save("path_to_model.onnx")
run.log_artifact("path_to_model.onnx", name="trained-model", type="model")

Learn more about Artifacts or about versioning models in Registry.

Putting it all together

The full script with the preceding code snippets is found below:

import wandb

with wandb.init(
    project="cat-classification",
    notes="My first experiment",
    tags=["baseline", "paper1"],
    # Record the run's hyperparameters.
    config={"epochs": 100, "learning_rate": 0.001, "batch_size": 128},
) as run:
    # Set up model and data.
    model, dataloader = get_model(), get_data()

    # Run your training while logging metrics to visualize model performance.
    for epoch in range(run.config["epochs"]):
        for batch in dataloader:
            loss, accuracy = model.training_step()
            run.log({"accuracy": accuracy, "loss": loss})

    # Upload the trained model as an artifact.
    model.save("path_to_model.onnx")
    run.log_artifact("path_to_model.onnx", name="trained-model", type="model")

Next steps: Visualize your experiment

Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like parallel coordinates plots, parameter importance analyses, and additional chart types.

Quickstart Sweeps Dashboard example

For more information on how to view experiments and specific runs, see Visualize results from experiments.

Best practices

The following are some suggested guidelines to consider when you create experiments:

  1. Finish your runs: Use wandb.init() in a with statement to automatically mark the run as finished when the code completes or raises an exception.
    • In Jupyter notebooks, it may be more convenient to manage the Run object yourself. In this case, you can explicitly call finish() on the Run object to mark it complete:

      # In a notebook cell:
      run = wandb.init()
      
      # In a different cell:
      run.finish()
      
  2. Config: Track hyperparameters, architecture, dataset, and anything else you’d like to use to reproduce your model. These show up as columns; use config columns to group, sort, and filter runs dynamically in the app.
  3. Project: A project is a set of experiments you can compare together. Each project gets a dedicated dashboard page, and you can easily turn on and off different groups of runs to compare different model versions.
  4. Notes: Set a quick commit message directly from your script. Edit and access notes in the Overview section of a run in the W&B App.
  5. Tags: Identify baseline runs and favorite runs. You can filter runs using tags. You can edit tags at a later time on the Overview section of your project’s dashboard on the W&B App.
  6. Create multiple run sets to compare experiments: When comparing experiments, create multiple run sets to make metrics easy to compare. You can toggle run sets on or off on the same chart or group of charts.

The following code snippet demonstrates how to define a W&B Experiment using the best practices listed above:

import wandb

config = {
    "learning_rate": 0.01,
    "momentum": 0.2,
    "architecture": "CNN",
    "dataset_id": "cats-0192",
}

with wandb.init(
    project="detect-cats",
    notes="tweak baseline",
    tags=["baseline", "paper1"],
    config=config,
) as run:
    ...

For more information about available parameters when defining a W&B Experiment, see the wandb.init() API docs in the API Reference Guide.

1.2 - Configure experiments

Use a dictionary-like object to save your experiment configuration

Use the config property of a run to save your training configuration:

  • hyperparameters
  • input settings, such as the dataset name or model type
  • any other independent variables for your experiments

The wandb.Run.config property makes it easy to analyze your experiments and reproduce your work in the future. You can group by configuration values in the W&B App, compare the configurations of different W&B runs, and evaluate how each training configuration affects the output. The config property is a dictionary-like object that can be composed from multiple dictionary-like objects.

Set up an experiment configuration

Configurations are typically defined in the beginning of a training script. Machine learning workflows may vary, however, so you are not required to define a configuration at the beginning of your training script.

Use dashes (-) or underscores (_) instead of periods (.) in your config variable names.

Use the dictionary access syntax ["key"]["value"] instead of the attribute access syntax config.key.value if your script accesses wandb.Run.config keys below the root.
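The following minimal sketch illustrates both guidelines; the key names are hypothetical:

import wandb

# Config keys use underscores rather than periods. Nested values are read
# with dictionary access, not attribute access.
config = {"optimizer": {"name": "adam", "learning_rate": 0.001}}

with wandb.init(project="config_example", config=config) as run:
    optimizer_name = run.config["optimizer"]["name"]
    learning_rate = run.config["optimizer"]["learning_rate"]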

The following sections outline common scenarios for defining your experiment’s configuration.

Set the configuration at initialization

Pass a dictionary at the beginning of your script when you call the wandb.init() API to generate a background process to sync and log data as a W&B Run.

The following code snippet demonstrates how to define a Python dictionary with configuration values and how to pass that dictionary as an argument when you initialize a W&B Run.

import wandb

# Define a config dictionary object
config = {
    "hidden_layer_sizes": [32, 64],
    "kernel_sizes": [3],
    "activation": "ReLU",
    "pool_sizes": [2],
    "dropout": 0.5,
    "num_classes": 10,
}

# Pass the config dictionary when you initialize W&B
with wandb.init(project="config_example", config=config) as run:
    ...

If you pass a nested dictionary as the config, W&B flattens the names using dots.

Access the values from the dictionary similarly to how you access other dictionaries in Python:

# Access values with the key as the index value
hidden_layer_sizes = run.config["hidden_layer_sizes"]
kernel_sizes = run.config["kernel_sizes"]
activation = run.config["activation"]

# Python dictionary get() method
hidden_layer_sizes = run.config.get("hidden_layer_sizes")
kernel_sizes = run.config.get("kernel_sizes")
activation = run.config.get("activation")

Set the configuration with argparse

You can set your configuration with an argparse object. argparse, short for argument parser, is a standard library module in Python 3.2 and above that makes it easy to write scripts that take advantage of all the flexibility and power of command line arguments.

This is useful for tracking results from scripts that are launched from the command line.

The following Python script demonstrates how to define a parser object to define and set your experiment config. The functions train_one_epoch and evaluate_one_epoch are provided to simulate a training loop for the purpose of this demonstration:

# config_experiment.py
import argparse
import random

import numpy as np
import wandb


# Training and evaluation demo code
def train_one_epoch(epoch, lr, bs):
    acc = 0.25 + ((epoch / 30) + (random.random() / 10))
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss


def evaluate_one_epoch(epoch):
    acc = 0.1 + ((epoch / 20) + (random.random() / 10))
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss


def main(args):
    # Start a W&B Run
    with wandb.init(project="config_example", config=args) as run:
        # Access values from config dictionary and store them
        # into variables for readability
        lr = run.config["learning_rate"]
        bs = run.config["batch_size"]
        epochs = run.config["epochs"]

        # Simulate training and logging values to W&B
        for epoch in np.arange(1, epochs):
            train_acc, train_loss = train_one_epoch(epoch, lr, bs)
            val_acc, val_loss = evaluate_one_epoch(epoch)

            run.log(
                {
                    "epoch": epoch,
                    "train_acc": train_acc,
                    "train_loss": train_loss,
                    "val_acc": val_acc,
                    "val_loss": val_loss,
                }
            )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )

    parser.add_argument("-b", "--batch_size", type=int, default=32, help="Batch size")
    parser.add_argument(
        "-e", "--epochs", type=int, default=50, help="Number of training epochs"
    )
    parser.add_argument(
        "-lr", "--learning_rate", type=float, default=0.001, help="Learning rate"
    )

    args = parser.parse_args()
    main(args)

Set the configuration throughout your script

You can add more parameters to your config object throughout your script. The following code snippet demonstrates how to add new key-value pairs to your config object:

import wandb

# Define a config dictionary object
config = {
    "hidden_layer_sizes": [32, 64],
    "kernel_sizes": [3],
    "activation": "ReLU",
    "pool_sizes": [2],
    "dropout": 0.5,
    "num_classes": 10,
}

# Pass the config dictionary when you initialize W&B
with wandb.init(project="config_example", config=config) as run:
    # Update config after you initialize W&B
    run.config["dropout"] = 0.2
    run.config.epochs = 4
    run.config["batch_size"] = 32

You can update multiple values at a time:

run.config.update({"lr": 0.1, "channels": 16})

Set the configuration after your Run has finished

Use the W&B Public API to update a completed run’s config.

You must provide the API with your entity, project name and the run’s ID. You can find these details in the Run object or in the W&B App:

with wandb.init() as run:
    ...

# Find the following values from the Run object if it was initiated from the
# current script or notebook, or you can copy them from the W&B App UI.
username = run.entity
project = run.project
run_id = run.id

# Note that api.run() returns a different type of object than wandb.init().
api = wandb.Api()
api_run = api.run(f"{username}/{project}/{run_id}")
api_run.config["bar"] = 32
api_run.update()

absl.FLAGS

You can also pass in absl flags.

from absl import flags

flags.DEFINE_string("model", None, "model to run")  # name, default, help

run.config.update(flags.FLAGS)  # adds absl flags to config

File-Based Configs

If you place a file named config-defaults.yaml in the same directory as your run script, the run automatically picks up the key-value pairs defined in the file and passes them to wandb.Run.config.

The following code snippet shows a sample config-defaults.yaml YAML file:

batch_size:
  desc: Size of each mini-batch
  value: 32

You can override the default values automatically loaded from config-defaults.yaml by setting updated values in the config argument of wandb.init. For example:

import wandb

# Override config-defaults.yaml by passing custom values
with wandb.init(config={"epochs": 200, "batch_size": 64}) as run:
    ...

To load a configuration file other than config-defaults.yaml, use the --configs command-line argument and specify the path to the file:

python train.py --configs other-config.yaml

Example use case for file-based configs

Suppose you have a YAML file with some metadata for the run, and then a dictionary of hyperparameters in your Python script. You can save both in the nested config object:

hyperparameter_defaults = dict(
    dropout=0.5,
    batch_size=100,
    learning_rate=0.001,
)

config_dictionary = dict(
    # my_yaml_file holds the metadata loaded from your YAML file,
    # for example with yaml.safe_load().
    yaml=my_yaml_file,
    params=hyperparameter_defaults,
)

with wandb.init(config=config_dictionary) as run:
    ...

TensorFlow v1 flags

You can pass TensorFlow flags into the wandb.Run.config object directly.

import tensorflow as tf
import wandb

with wandb.init() as run:
    run.config.epochs = 4

    flags = tf.app.flags
    flags.DEFINE_string("data_dir", "/tmp/data", "Directory with the training data.")
    flags.DEFINE_integer("batch_size", 128, "Batch size.")
    run.config.update(flags.FLAGS)  # add TensorFlow flags as config

1.3 - Projects

Compare versions of your model, explore results in a scratch workspace, and export findings to a report to save notes and visualizations

A project is a central location where you visualize results, compare experiments, view and download artifacts, create an automation, and more.

Each project contains the following tabs:

  • Overview: snapshot of your project
  • Workspace: personal visualization sandbox
  • Runs: A table that lists all the runs in your project
  • Automations: Automations configured in your project
  • Sweeps: automated exploration and optimization
  • Reports: saved snapshots of notes, runs, and graphs
  • Artifacts: All artifacts logged by runs in the project

Overview tab

  • Project name: The name of the project. W&B creates a project for you when you initialize a run with the name you provide for the project field. You can change the name of the project at any time by selecting the Edit button in the upper right corner.
  • Description: A description of the project.
  • Project visibility: The visibility setting that determines who can access the project. See Project visibility for more information.
  • Last active: Timestamp of the last time data was logged to this project
  • Owner: The entity that owns this project
  • Contributors: The number of users that contribute to this project
  • Total runs: The total number of runs in this project
  • Total compute: The sum of all run times in your project
  • Undelete runs: Click the dropdown menu and click “Undelete all runs” to recover deleted runs in your project.
  • Delete project: click the dot menu in the right corner to delete a project

View a live example

Project overview tab

To change a project’s privacy from the Overview tab:

  1. In the W&B App, from any page in the project, click Overview in the left navigation.

  2. At the top right, click Edit.

  3. Choose a new value for Project visibility:

    • Team (default): Only your team can view and edit the project.

    • Restricted: Only invited members can access the project, and public access is turned off.

    • Open: Anyone can submit runs or create reports, but only your team can edit it. Appropriate only for classroom settings, public benchmark competitions, or other non-durable contexts.

    • Public: Anyone can view the project, but only your team can edit it.

  4. Click Save.

If you update a project to a more strict privacy setting, you may need to re-invite individual users to restore their ability to access the project.

Workspace tab

A project’s workspace gives you a personal sandbox to compare experiments. Use projects to organize models that can be compared: runs that tackle the same problem with different architectures, hyperparameters, datasets, preprocessing, and so on.

Runs Sidebar: list of all the runs in your project.

  • Dot menu: hover over a row in the sidebar to see the menu appear on the left side. Use this menu to rename a run, delete a run, or stop an active run.
  • Visibility icon: click the eye to turn on and off runs on graphs
  • Color: change the run color to another one of our presets or a custom color
  • Search: search runs by name. This also filters visible runs in the plots.
  • Filter: use the sidebar filter to narrow down the set of runs visible
  • Group: select a config column to dynamically group your runs, for example by architecture. Grouping makes plots show up with a line along the mean value, and a shaded region for the variance of points on the graph.
  • Sort: pick a value to sort your runs by, for example runs with the lowest loss or highest accuracy. Sorting will affect which runs show up on the graphs.
  • Expand button: expand the sidebar into the full table
  • Run count: the number in parentheses at the top is the total number of runs in the project. The number (N visualized) is the number of runs that have the eye turned on and are available to be visualized in each plot. In the example below, the graphs are only showing the first 10 of 183 runs. Edit a graph to increase the max number of runs visible.

If you pin, hide, or change the order of columns in the Runs tab, the Runs sidebar reflects these customizations.

Panels layout: use this scratch space to explore results, add and remove charts, and compare versions of your models based on different metrics

View a live example

Project workspace

Add a section of panels

Click the section dropdown menu and click “Add section” to create a new section for panels. You can rename sections, drag them to reorganize them, and expand and collapse sections.

Each section has options in the upper right corner:

  • Add section: Add a section above or below from the dropdown menu, or click the button at the bottom of the page to add a new section.
  • Rename section: Change the title for your section.
  • Export section to report: Save this section of panels to a new report.
  • Delete section: Remove the whole section and all the charts. This can be undone with the undo button at the bottom of the page in the workspace bar.
  • Add panel: Click the plus button to add a panel to the section.
Adding workspace section

Move panels between sections

Drag and drop panels to reorder and organize into sections. You can also click the “Move” button in the upper right corner of a panel to select a section to move the panel to.

Moving panels between sections

Resize panels

By default, all panels in a section share the same size and are organized into pages. To resize a panel, click and drag its lower right corner. To resize a section, click and drag the lower right corner of the section.

Resizing panels

Search for metrics

Use the search box in the workspace to filter down the panels. This search matches the panel titles, which are by default the name of the metrics visualized.

Workspace search

Runs tab

Use the Runs tab to filter, group, and sort your runs.

Runs table

The following sections demonstrate some common actions you can take in the Runs tab.

The Runs tab shows details about runs in the project. It shows a large number of columns by default.

  • To view all visible columns, scroll the page horizontally.
  • To change the order of the columns, drag a column to the left or right.
  • To pin a column, hover over the column name, click the action menu ... that appears, then click Pin column. Pinned columns appear near the left of the page, after the Name column. To unpin a pinned column, choose Unpin column.
  • To hide a column, hover over the column name, click the action menu ... that appears, then click Hide column. To view all columns that are currently hidden, click Columns.
  • To show, hide, pin, and unpin multiple columns at once, click Columns.
    • Click the name of a hidden column to unhide it.
    • Click the name of a visible column to hide it.
    • Click the pin icon next to a visible column to pin it.

Sort all rows in a Table by the value in a given column.

  1. Hover your mouse over the column title. A kebab menu appears (three vertical dots).
  2. Click the kebab menu.
  3. Choose Sort Asc or Sort Desc to sort the rows in ascending or descending order, respectively.
Confident predictions

The preceding image demonstrates how to view sorting options for a Table column called val_acc.

Filter all rows by an expression with the Filter button on the top left of the dashboard.

Incorrect predictions filter

Select Add filter to add one or more filters to your rows. Three dropdown menus appear. From left to right, the filters are based on: Column name, Operator, and Value.

Accepted values for each dropdown:

  • Column name: a string
  • Operator: =, ≠, ≤, ≥, IN, NOT IN
  • Value: an integer, float, string, timestamp, or null

The expression editor shows a list of options for each term using autocomplete on column names and logical predicate structure. You can connect multiple logical predicates into one expression using “and” or “or” (and sometimes parentheses).

Filtering runs by validation loss

The preceding image shows a filter based on the val_loss column. The filter shows runs with a validation loss less than or equal to 1.

Group all rows by the value in a particular column with the Group by button in a column header.

Error distribution analysis

By default, this turns other numeric columns into histograms showing the distribution of values for that column across the group. Grouping is helpful for understanding higher-level patterns in your data.

Automations tab

Automate downstream actions for versioning artifacts. To create an automation, define trigger events and resulting actions. Actions include executing a webhook or launching a W&B job. For more information, see Automations.

Automation tab

Reports tab

See all the snapshots of results in one place, and share findings with your team.

Reports tab

Sweeps tab

Start a new sweep from your project.

Sweeps tab

Artifacts tab

View all artifacts associated with a project, from training datasets and fine-tuned models to tables of metrics and media.

Overview panel

Artifact overview panel

On the overview panel, you’ll find a variety of high-level information about the artifact, including its name and version, the hash digest used to detect changes and prevent duplication, the creation date, and any aliases. You can add or remove aliases here, and take notes on both the version and the artifact as a whole.

Metadata panel

Artifact metadata panel

The metadata panel provides access to the artifact’s metadata, which is provided when the artifact is constructed. This metadata might include configuration arguments required to reconstruct the artifact, URLs where more information can be found, or metrics produced during the run which logged the artifact. Additionally, you can see the configuration for the run which produced the artifact as well as the history metrics at the time of logging the artifact.
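As a sketch of how that metadata gets attached, you can pass a metadata dictionary when you construct an artifact. The project name, metadata keys, and file path below are hypothetical:

import wandb

with wandb.init(project="artifact-metadata-example") as run:
    # Attach metadata when the artifact is constructed.
    artifact = wandb.Artifact(
        name="trained-model",
        type="model",
        metadata={"framework": "onnx", "val_accuracy": 0.93},  # illustrative values
    )
    artifact.add_file("path_to_model.onnx")
    run.log_artifact(artifact)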

Usage panel

Artifact usage panel

The Usage panel provides a code snippet for downloading the artifact for use outside of the web app, for example on a local machine. This section also indicates and links to the run which output the artifact and any runs which use the artifact as an input.
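The snippet it provides is typically along these lines; this is a hedged sketch in which the entity, project, artifact name, and alias are placeholders:

import wandb

# Download an artifact outside of a run using the Public API.
api = wandb.Api()
artifact = api.artifact("<entity>/<project>/<artifact_name>:<alias>")
local_path = artifact.download()
print(f"Artifact files downloaded to {local_path}")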

Files panel

Artifact files panel

The files panel lists the files and folders associated with the artifact. W&B uploads certain files for a run automatically. For example, requirements.txt shows the versions of each library the run used, while wandb-metadata.json and wandb-summary.json include information about the run. Other files may be uploaded, such as artifacts or media, depending on the run’s configuration. You can navigate through this file tree and view the contents directly in the W&B web app.

Tables associated with artifacts are particularly rich and interactive in this context. Learn more about using Tables with Artifacts here.

Artifact table view

Lineage panel

Artifact lineage

The lineage panel provides a view of all of the artifacts associated with a project and the runs that connect them to each other. It shows run types as blocks and artifacts as circles, with arrows to indicate when a run of a given type consumes or produces an artifact of a given type. The type of the particular artifact selected in the left-hand column is highlighted.

Click the Explode toggle to view all of the individual artifact versions and the specific runs that connect them.

Action History Audit tab

Action history audit

The action history audit tab shows all of the alias actions and membership changes for a Collection so you can audit the entire evolution of the resource.

Versions tab

Artifact versions tab

The versions tab shows all versions of the artifact as well as columns for each of the numeric values of the Run History at the time of logging the version. This allows you to compare performance and quickly identify versions of interest.

Create a project

You can create a project in the W&B App or programmatically by specifying a project in a call to wandb.init().

In the W&B App, you can create a project from the Projects page or from a team’s landing page.

From the Projects page:

  1. Click the global navigation icon in the upper left. The navigation sidebar opens.
  2. In the Projects section of the navigation, click View all to open the project overview page.
  3. Click Create new project.
  4. Set Team to the name of the team that will own the project.
  5. Specify a name for your project using the Name field.
  6. Set Project visibility, which defaults to Team.
  7. Optionally, provide a Description.
  8. Click Create project.

From a team’s landing page:

  1. Click the global navigation icon in the upper left. The navigation sidebar opens.
  2. In the Teams section of the navigation, click the name of a team to open its landing page.
  3. In the landing page, click Create new project.
  4. Team is automatically set to the team that owns the landing page you were viewing. If necessary, change the team.
  5. Specify a name for your project using the Name field.
  6. Set Project visibility, which defaults to Team.
  7. Optionally, provide a Description.
  8. Click Create project.

To create a project programmatically, specify a project when calling wandb.init(). If the project does not yet exist, it is created automatically, and is owned by the specified entity. For example:

import wandb

with wandb.init(entity="<entity>", project="<project_name>") as run:
    run.log({"accuracy": 0.95})

Refer to the wandb.init() API reference.

Star a project

Add a star to a project to mark that project as important. Projects that you and your team mark as important with stars appear at the top of your organization’s homepage.

For example, the following image shows two projects that are marked as important, the zoo_experiment and registry_demo. Both projects appear at the top of the organization’s homepage within the Starred projects section.

Starred projects section

There are two ways to mark a project as important: from a project’s Overview tab or from your team’s profile page.

From the project’s Overview tab:

  1. Navigate to your W&B project on the W&B App at https://wandb.ai/<team>/<project-name>.
  2. Select the Overview tab from the project sidebar.
  3. Choose the star icon in the upper right corner next to the Edit button.

Star project from overview

From your team’s profile page:

  1. Navigate to your team’s profile page at https://wandb.ai/<team>/projects.
  2. Select the Projects tab.
  3. Hover your mouse next to the project you want to star. Click the star icon that appears.

For example, the following image shows the star icon next to the “Compare_Zoo_Models” project.

Star project from team page

Confirm that your project appears on the landing page of your organization by clicking on the organization name in the top left corner of the app.

Delete a project

You can delete your project by clicking the three dots on the right of the overview tab.

Delete project workflow

If the project is empty, you can delete it by clicking the dropdown menu in the top-right and selecting Delete project.

Delete empty project

Add notes to a project

Add notes to your project either as a description overview or as a markdown panel within your workspace.

Add description overview to a project

Descriptions you add appear in the Overview tab of your project.

  1. Navigate to your W&B project
  2. Select the Overview tab from the project sidebar
  3. Choose Edit in the upper right hand corner
  4. Add your notes in the Description field
  5. Select the Save button

Add notes to run workspace

  1. Navigate to your W&B project
  2. Select the Workspace tab from the project sidebar
  3. Choose the Add panels button from the top right corner
  4. Select the TEXT AND CODE dropdown from the modal that appears
  5. Select Markdown
  6. Add your notes in the markdown panel that appears in your workspace