Model Management Walkthrough
In this walkthrough you'll learn how to use Weights & Biases for Model Management: track, visualize, and report on the complete production model workflow.
- Model Versioning: Save and restore every version of your model & learned parameters - organize versions by use case and objective. Track training metrics, assign custom metadata, and document rich markdown descriptions of your models.
- Model Lineage: Track the exact code, hyperparameters, & training dataset used to produce the model. Enable model reproducibility.
- Model Lifecycle: Promote promising models to positions like "staging" or "production" - allowing downstream users to fetch the best model automatically. Communicate progress collaboratively in Reports.
We are actively building new Model Management features. Please reach out with questions or suggestions at support@wandb.com.
Please see the Artifact Tab details for a discussion of all content available in the Model Registry!
Workflow
Now we will walk through a canonical workflow for producing, organizing, and consuming trained models:
- Create a new Registered Model
- Train & log Model Versions
- Link Model Versions to the Registered Model
- Use a Model Version
- Evaluate Model Performance
- Promote a Version to Production
- Use the Production Model for Inference
- Build a Reporting Dashboard
A companion Colab notebook is provided: its first code block covers steps 2 and 3, and its second covers steps 4 through 6.
1. Create a new Registered Model
First, create a Registered Model to hold all the candidate models for your particular modeling task. In this tutorial, we will use the classic MNIST Dataset - 28x28 grayscale input images with output classes from 0-9. The video below demonstrates how to create a new Registered Model.
- Using Model Registry
- Using Artifact Browser
- Programmatic Linking
- Visit your Model Registry at wandb.ai/registry/model (linked from the homepage).
- Click the `Create Registered Model` button at the top of the Model Registry.
- Make sure the `Owning Entity` and `Owning Project` are set correctly to the values you desire. Enter a unique name for your new Registered Model that describes the modeling task or use case of interest.
- Visit your Project's Artifact Browser: wandb.ai/<entity>/<project>/artifacts
- Click the `+` icon at the bottom of the Artifact Browser sidebar.
- Select `Type: model` and `Style: Collection`, and enter a name - in our case, `MNIST Grayscale 28x28`. Remember, a Collection should map to a modeling task - enter a unique name that describes the use case.
If you already have a logged model version, you can link it directly to a registered model from the SDK. If the registered model you specify doesn't exist, it will be created for you.
While manual linking is useful for one-off Models, you will often want to programmatically link Model Versions to a Collection - consider a nightly job or CI pipeline that links the best Model Version from every training job. Depending on your context and use case, you can use one of three linking APIs:
Fetch Model Artifact from Public API:
import wandb
# Fetch the Model Version via API
art = wandb.Api().artifact(...)
# Link the Model Version to the Model Collection
art.link("[[entity/]project/]collectionName")
Model Artifact is "used" by the current Run:
import wandb
# Initialize a W&B run to start tracking
wandb.init()
# Obtain a reference to a Model Version
art = wandb.use_artifact(...)
# Link the Model Version to the Model Collection
art.link("[[entity/]project/]collectionName")
Model Artifact is logged by the current Run:
import wandb
# Initialize a W&B run to start tracking
wandb.init()
# Create a Model Version
art = wandb.Artifact(...)
# Log the Model Version
wandb.log_artifact(art)
# Link the Model Version to the Collection
wandb.run.link_artifact(art, "[[entity/]project/]collectionName")
2. Train & log Model Versions
Next, you will log a model from your training script:
- (Optional) Declare your dataset as a dependency so that it is tracked for reproducibility and auditability
- Serialize your model to disk periodically (and/or at the end of training) using the serialization process provided by your modeling library (e.g. PyTorch or Keras).
- Add your model files to an Artifact of type "model"
- Note: We use the name `f'mnist-nn-{wandb.run.id}'`. While not required, it is advisable to namespace your "draft" Artifacts with the Run id in order to stay organized
- (Optional) Log training metrics associated with the performance of your model during training.
- Note: The data logged immediately before logging your Model Version will automatically be associated with that version
- Log your model
- Note: If you are logging multiple versions, it is advisable to add an alias of "best" to your Model Version when it outperforms the prior versions. This will make it easy to find the model with peak performance - especially when the tail end of training may overfit!
By default, you should use the native W&B Artifacts API to log your serialized model. However, since this pattern is so common, we provide a single method that combines serialization, Artifact creation, and logging. See the "[Beta] Using `log_model()`" tab for details.
- Using Artifacts
- Declare Dataset Dependency
- [Beta] Using `log_model()`
import wandb
# Always initialize a W&B run to start tracking
wandb.init()
# (Optional) Declare an upstream dataset dependency
# see the `Declare Dataset Dependency` tab for
# alternative examples.
dataset = wandb.use_artifact("mnist:latest")
# At the end of every epoch (or at the end of your script)...
# ... Serialize your model
model.save("path/to/model.pt")
# ... Create a Model Version
art = wandb.Artifact(f'mnist-nn-{wandb.run.id}', type="model")
# ... Add the serialized files
art.add_file("path/to/model.pt", "model.pt")
# (optional) Log training metrics
wandb.log({"train_loss": 0.345, "val_loss": 0.456})
# ... Log the Version
if model_is_best:
    # If the model is the best model so far, add "best" to the aliases
    wandb.log_artifact(art, aliases=["latest", "best"])
else:
    wandb.log_artifact(art)
If you would like to track your training data, you can declare a dependency by calling `wandb.use_artifact` on your dataset. Here are three examples of declaring a dataset dependency:
Dataset stored in W&B
dataset = wandb.use_artifact("[[entity/]project/]name:alias")
Dataset stored on Local Filesystem
art = wandb.Artifact("dataset_name", "dataset")
art.add_dir("path/to/data") # or art.add_file("path/to/data.csv")
dataset = wandb.use_artifact(art)
Dataset stored on Remote Bucket
art = wandb.Artifact("dataset_name", "dataset")
art.add_reference("s3://path/to/data")
dataset = wandb.use_artifact(art)
The following code snippet leverages actively developed beta APIs and is therefore subject to change and not guaranteed to be backwards compatible.
from wandb.beta.workflows import log_model
# (Optional) Declare an upstream dataset dependency
# see the `Declare Dataset Dependency` tab for
# alternative examples.
dataset = wandb.use_artifact("mnist:latest")
# (optional) Log training metrics
wandb.log({"train_loss": 0.345, "val_loss": 0.456})
# This one method will serialize the model, start a run, create a version
# add the files to the version, and log the version. You can override
# the default name, project, aliases, metadata, and more!
log_model(model, "mnist-nn", aliases=["best"] if model_is_best else [])
Note: you may want to define custom serialization and deserialization strategies. You can do so by subclassing the `_SavedModel` class, similar to the `_PytorchSavedModel` class. All subclasses will automatically be loaded into the serialization registry. As this is a beta feature, please reach out to support@wandb.com with questions or comments.
After logging one or more Model Versions, you will notice a new Model Artifact in your Artifact Browser. Here, we can see the results of logging 5 versions to an artifact named `mnist_nn-1r9jjogr`.
If you are following along in the example notebook, you should see a Run Workspace with charts similar to the image below.
3. Link Model Versions to the Registered Model
Now, let's say that we are ready to link one of our Model Versions to the Registered Model. We can accomplish this manually as well as via an API.
- Manual Linking
- Programmatic Linking
- [Beta] Using `log_model()`
The video below demonstrates how to manually link a Model Version to your newly created Registered Model:
- Navigate to the Model Version of interest
- Click the link icon
- Select the target Registered Model
- (optional): Add additional aliases
While manual linking is useful for one-off Models, you will often want to programmatically link Model Versions to a Collection - consider a nightly job or CI pipeline that links the best Model Version from every training job. Depending on your context and use case, you can use one of three linking APIs:
Fetch Model Artifact from Public API:
import wandb
# Fetch the Model Version via API
art = wandb.Api().artifact(...)
# Link the Model Version to the Model Collection
art.link("[[entity/]project/]collectionName")
Model Artifact is "used" by the current Run:
import wandb
# Initialize a W&B run to start tracking
wandb.init()
# Obtain a reference to a Model Version
art = wandb.use_artifact(...)
# Link the Model Version to the Model Collection
art.link("[[entity/]project/]collectionName")
Model Artifact is logged by the current Run:
import wandb
# Initialize a W&B run to start tracking
wandb.init()
# Create a Model Version
art = wandb.Artifact(...)
# Log the Model Version
wandb.log_artifact(art)
# Link the Model Version to the Collection
wandb.run.link_artifact(art, "[[entity/]project/]collectionName")
The following code snippet leverages actively developed beta APIs and is therefore subject to change and not guaranteed to be backwards compatible.
If you logged a model with the beta `log_model` discussed above, you can use its companion method `link_model`:
from wandb.beta.workflows import log_model, link_model
# Obtain a Model Version
model_version = log_model(model, "mnist_nn")
# Link the Model Version
link_model(model_version, "[[entity/]project/]collectionName")
After you link the Model Version, you will see hyperlinks connecting the Version in the Registered Model to the source Artifact and vice versa.
4. Use a Model Version
Now we are ready to consume a Model - perhaps to evaluate its performance, make predictions against a dataset, or use it in a live production context. Similar to logging a Model, you may choose to use the raw Artifact API or the more opinionated beta APIs.
- Using Artifacts
- [Beta] Using `use_model()`
You can load a Model Version using the `use_artifact` method.
import wandb
# Always initialize a W&B run to start tracking
wandb.init()
# Download your Model Version files
path = wandb.use_artifact("[[entity/]project/]collectionName:latest").download()
# Reconstruct your model object in memory:
# `make_model_from_data` below represents your deserialization logic
# to load in a model from disk
model = make_model_from_data(path)
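The `make_model_from_data` deserialization logic is yours to define. As a minimal sketch (assuming, hypothetically, that training pickled the model to a single `model.pt` file; substitute your library's own loader, such as `torch.load` or `keras.models.load_model`):

```python
import os
import pickle

def make_model_from_data(path: str):
    """Reconstruct a model object from a downloaded Artifact directory.

    Hypothetical assumption: the Artifact contains one pickled file,
    `model.pt`. Swap in your library's loader if you serialized differently.
    """
    with open(os.path.join(path, "model.pt"), "rb") as f:
        return pickle.load(f)
```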
The following code snippet leverages actively developed beta APIs and is therefore subject to change and not guaranteed to be backwards compatible.
Directly manipulating model files and handling deserialization can be tricky - especially if you were not the one who serialized the model. As a companion to `log_model`, `use_model` automatically deserializes and reconstructs your model for use.
from wandb.beta.workflows import use_model
model = use_model("[[entity/]project/]collectionName").model_obj()
5. Evaluate Model Performance
After training many Models, you will likely want to evaluate the performance of those models. In most circumstances you will have some held-out data which serves as a test dataset, independent of the dataset your models have access to during training. To evaluate a Model Version, you will want to first complete step 4 above to load a model into memory. Then:
- (Optional) Declare a data dependency to your evaluation data
- Log metrics, media, tables, and anything else useful for evaluation
# ... continuation from 4
# (Optional) Declare an upstream evaluation dataset dependency
dataset = wandb.use_artifact("mnist-evaluation:latest")
# Evaluate your model according to your use-case
loss, accuracy, predictions = evaluate_model(model, dataset)
# Log out metrics, images, tables, or any data useful for evaluation.
wandb.log({"loss": loss, "accuracy": accuracy, "predictions": predictions})
If you are executing similar code, as demonstrated in the notebook, you should see a workspace similar to the image below - here we even show model predictions against the test data!
6. Promote a Version to Production
Next, you will likely want to denote which Version in the Registered Model is intended for production. Here, we use the concept of aliases. Each Registered Model can have whatever aliases make sense for your use case; `production` is the most common. Each alias can only be assigned to a single Version at a time.
- with UI Interface
- with API
Follow the steps in Part 3 (Link Model Versions to the Registered Model) and add the aliases you want via the `aliases` parameter.
The image below shows the new `production` alias added to v1 of the Registered Model!
7. Consume the Production Model
wandb.use_artifact("[[entity/]project/]registeredModelName:production")
You can reference a Version within the Registered Model using different alias strategies:
- `latest` - fetches the most recently linked Version
- `v#` - using `v0`, `v1`, `v2`, ... you can fetch a specific Version in the Registered Model
- `production` - you can use any custom alias that you and your team have assigned
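To make these strategies concrete, a small helper can assemble the path string that `use_artifact` expects. The entity, project, and registered-model names below are hypothetical placeholders:

```python
def registered_model_path(entity: str, project: str, name: str, alias: str) -> str:
    """Build the "[entity/]project/name:alias" path that use_artifact() expects."""
    return f"{entity}/{project}/{name}:{alias}"

# Inside a run, fetch whichever Version currently holds the "production" alias:
# run = wandb.init()
# model_art = run.use_artifact(
#     registered_model_path("my-entity", "my-project", "MNIST Grayscale 28x28", "production")
# )
```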
8. Build a Reporting Dashboard
Using Weave Panels, you can display any of the Model Registry/Artifact views inside of Reports! See a demo here. Below is a full-page screenshot of an example Model Dashboard.