run.log_artifact(). You can also track datasets in a remote filesystem (e.g. cloud storage in S3 or GCP) by reference, using a link or URI instead of the raw contents.
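For example, a minimal sketch of both approaches (the project name, local path, and bucket URI below are made up for illustration):

```python
import wandb

run = wandb.init(project="artifact-demo")  # hypothetical project name

# Track a local file by uploading its contents to W&B.
dataset = wandb.Artifact("my_dataset", type="dataset")
dataset.add_file("data/train.csv")  # placeholder path

# Track cloud storage by reference: W&B records the URI and checksums,
# not the raw contents.
dataset.add_reference("s3://my-bucket/datasets/train/")  # placeholder URI

run.log_artifact(dataset)
run.finish()
```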
, "bert"
, "stacked_lstm"
) and a name ("my_resnet50_variant_with_attention"
). When you log the same name again, W&B automatically creates a new version of the artifact with the latest contents. You can use artifact versions to checkpoint models during training — just log a new model file to the same name at each checkpoint."baseline"
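A minimal checkpointing sketch, assuming the training loop and checkpoint files are your own (paths and artifact names are placeholders):

```python
import wandb

run = wandb.init(project="artifact-demo")

for epoch in range(3):
    # ... train for one epoch and write a checkpoint to this path ...
    checkpoint_path = f"checkpoints/model_epoch_{epoch}.pt"  # placeholder

    # Logging to the same artifact name at each checkpoint creates a new
    # version (v0, v1, v2, ...) rather than a new artifact.
    ckpt = wandb.Artifact("my_resnet50_variant_with_attention", type="resnet50")
    ckpt.add_file(checkpoint_path)
    run.log_artifact(ckpt)

run.finish()
```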
, "best"
, or "production"
to highlight the important versions in a lineage of experiments and developed models.inceptionV3
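Aliases can be attached at logging time; a sketch, with placeholder names and paths:

```python
import wandb

run = wandb.init(project="artifact-demo")

best_ckpt = wandb.Artifact("my_resnet50_variant_with_attention", type="resnet50")
best_ckpt.add_file("checkpoints/best_model.pt")  # placeholder path

# Aliases mark this particular version so it is easy to find later.
run.log_artifact(best_ckpt, aliases=["best", "baseline"])
run.finish()
```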
."cnn_model"
vs "rnn_model"
, "ppo_agent"
vs "dqn_agent"
) while names could capture more detail ("cnn_5conv_2fc"
, "ppo_lr_3e4_lmda_0.95_y_0.97"
, etc). Checkpoint your model as versions of the artifact under the same name to easily organize and track your work. From your code or browser, you can associate individual checkpoints with descriptive notes or tags and access any experiment runs which use that particular model checkpoint (for inference, fine-tuning, etc). When creating the artifact, you can also upload associated metadata as a key-value dictionary (e.g. hyperparameter values, experiment settings, or longer descriptive text)."baseline"
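A sketch of attaching a description and a metadata dictionary when the artifact is created (the name, type, and values here are illustrative, not prescriptive):

```python
import wandb

run = wandb.init(project="artifact-demo")

model_artifact = wandb.Artifact(
    "cnn_5conv_2fc",          # detailed name for this model variant
    type="cnn_model",         # broad model family
    description="5 conv + 2 fc baseline",  # free-form note (assumed text)
    metadata={                # arbitrary key-value metadata
        "learning_rate": 3e-4,
        "batch_size": 128,
        "epochs": 50,
    },
)
model_artifact.add_file("checkpoints/cnn_5conv_2fc.pt")  # placeholder path
run.log_artifact(model_artifact)
run.finish()
```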
, "production"
, "ablation"
, or any other custom tag, from the W&B UI or from your code. You can also add longer notes or dictionary-style metadata elsewhere."latest"
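From code, one way to do this is through the public API, which lets you edit a version after it has been logged; a sketch with hypothetical entity, project, and version identifiers:

```python
import wandb

api = wandb.Api()

# Fetch a specific logged version (identifiers below are hypothetical).
version = api.artifact("my-team/my-project/cnn_5conv_2fc:v3")

# Attach a custom alias and a longer note to this version.
version.aliases.append("ablation")
version.description = "Ablation run: removed the second fully connected layer."
version.save()
```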
The "latest" alias always points to the most recently logged version of a given model artifact. We recommend using a separate artifact type for each general class of model you are comparing ("resnet_model" vs "inceptionV3_model", "a2c_agent" vs "a3c_agent"). Different model artifacts within the type then have different names. For a given named model artifact, we recommend that the artifact's versions correspond to consecutive model checkpoints [1].
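A downstream run can then pull a checkpoint by alias; for example, a sketch of loading the most recent version for evaluation (project and artifact names are placeholders):

```python
import wandb

run = wandb.init(project="artifact-demo", job_type="evaluation")

# "latest" resolves to the most recently logged version of this artifact.
artifact = run.use_artifact("my_resnet50_variant_with_attention:latest")
checkpoint_dir = artifact.download()

# ... load the checkpoint files from checkpoint_dir for inference or fine-tuning ...
run.finish()
```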
, "SOTA"
, or "baseline"
to standardize models across your team. These will reliably return the same model checkpoint files, facilitating more scalable and reproducible workflows across file systems, environments, hardware, user accounts, etc.
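For instance, any teammate or environment with access to the project can resolve such an alias to the exact same files; a sketch, with placeholder entity and project names:

```python
import wandb

api = wandb.Api()

# Resolves the "prod_ready" alias to one specific version and its files.
model = api.artifact("my-team/my-project/my_resnet50_variant_with_attention:prod_ready")
model_dir = model.download()
# ... load the checkpoint from model_dir for serving or further training ...
```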