Artifacts

Use W&B Artifacts to track and version data as the inputs and outputs of your W&B Runs. For example, a model training run might take in a dataset as input and produce a trained model as output. In addition to logging hyperparameters, metadata, and metrics to a run, you can use one artifact to log, track, and version the dataset used to train the model as input, and another artifact for the resulting model checkpoints as output.
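
For a concrete picture of that pattern, here is a minimal sketch of a training run that consumes a dataset artifact as input and logs a model artifact as output. The names example_dataset, example_model, and model.h5 are placeholders for illustration and are not part of the step-by-step snippets below:

import wandb

run = wandb.init(project="artifacts-example", job_type="train")

# Mark a previously logged dataset artifact as an input to this run and download it
dataset = run.use_artifact("example_dataset:latest")
dataset_dir = dataset.download()

# ... train a model on the downloaded data and write a checkpoint to ./model.h5 ...

# Log the trained checkpoint as a model artifact, recorded as an output of the run
model_artifact = wandb.Artifact(name="example_model", type="model")
model_artifact.add_file(local_path="./model.h5")
run.log_artifact(model_artifact)

run.finish()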

Use cases

You can use artifacts throughout your entire ML workflow as inputs and outputs of runs. You can use datasets, models, or even other artifacts as inputs for processing.

| Use Case | Input | Output |
| --- | --- | --- |
| Model Training | Dataset (training and validation data) | Trained Model |
| Dataset Pre-Processing | Dataset (raw data) | Dataset (pre-processed data) |
| Model Evaluation | Model + Dataset (test data) | W&B Table |
| Model Optimization | Model | Optimized Model |
note

The following code snippets are meant to be run in order.

Create an artifact

Create an artifact with four lines of code:

  1. Create a W&B run.
  2. Create an artifact object with the wandb.Artifact API.
  3. Add one or more files, such as a model file or dataset, to your artifact object.
  4. Log your artifact to W&B.

For example, the following code snippet shows how to log a file called dataset.h5 to an artifact called example_artifact:

import wandb

run = wandb.init(project="artifacts-example", job_type="add-dataset")
artifact = wandb.Artifact(name="example_artifact", type="dataset")
artifact.add_file(local_path="./dataset.h5", name="training_dataset")
artifact.save()

# Logs a version of the "example_artifact" dataset artifact with data from dataset.h5
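
As an aside, the same logging step can also be expressed through the run object instead of artifact.save(); a minimal equivalent sketch:

run.log_artifact(artifact)  # records the artifact as an output of the current run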
tip

See the track external files page for information on how to add references to files or directories stored in external object storage, like an Amazon S3 bucket.
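
For instance, a minimal sketch of a reference-tracked artifact. The bucket URI s3://my-bucket/datasets/ and the artifact name reference_example are placeholders, and this assumes you have access to the bucket:

ref_artifact = wandb.Artifact(name="reference_example", type="dataset")
ref_artifact.add_reference("s3://my-bucket/datasets/")  # tracks checksums and metadata instead of uploading the files
run.log_artifact(ref_artifact)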

Download an artifact

Use the use_artifact method to mark an artifact as an input to your run.

Following the preceding code snippet, this next code block shows how to use the example_artifact artifact you logged above:

artifact = run.use_artifact("example_artifact:latest")  # returns an Artifact object for the logged "example_artifact"

This returns an artifact object.

Next, use the returned object to download all contents of the artifact:

datadir = artifact.download()  # downloads the full "example_artifact" artifact to the default directory
tip

You can pass a custom path into the root parameter to download an artifact to a specific directory. For alternate ways to download artifacts and to see additional parameters, see the guide on downloading and using artifacts.
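
For example, a minimal sketch (the ./artifacts/training_data path is a placeholder):

datadir = artifact.download(root="./artifacts/training_data")  # downloads the artifact contents into the given directory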

Next steps
