Reference
Generated documentation for Weights & Biases APIs
These docs are automatically generated from the wandb library.
Reference sections
- Python Library: Add wandb to your script to capture metrics and save artifacts
- Command Line Interface: Log in, run jobs, execute sweeps, and more using shell commands
- JavaScript Library: A beta JavaScript/TypeScript client to track metrics from your Node server
- Query panels: A beta query language to select and aggregate data
Examples and guides
Our examples repo has scripts and Colab notebooks that demonstrate W&B features and integrations with various libraries.
Our developer guide has guides, tutorials, and FAQs for the various W&B products.
1 - Command Line Interface
Usage
wandb [OPTIONS] COMMAND [ARGS]...
Options
| Option | Description |
|---|---|
| --version | Show the version and exit. |
Commands
| Command | Description |
|---|---|
| agent | Run the W&B agent |
| artifact | Commands for interacting with artifacts |
| beta | Beta versions of wandb CLI commands. |
| controller | Run the W&B local sweep controller |
| disabled | Disable W&B. |
| docker | Run your code in a docker container. |
| docker-run | Wraps docker run and adds WANDB_API_KEY and WANDB_DOCKER… |
| enabled | Enable W&B. |
| init | Configure a directory with Weights & Biases |
| job | Commands for managing and viewing W&B jobs |
| launch | Launch or queue a W&B Job. |
| launch-agent | Run a W&B launch agent. |
| launch-sweep | Run a W&B launch sweep (Experimental). |
| login | Login to Weights & Biases |
| offline | Disable W&B sync |
| online | Enable W&B sync |
| pull | Pull files from Weights & Biases |
| restore | Restore code, config and docker state for a run |
| scheduler | Run a W&B launch sweep scheduler (Experimental) |
| server | Commands for operating a local W&B server |
| status | Show configuration settings |
| sweep | Initialize a hyperparameter sweep. |
| sync | Upload an offline training directory to W&B |
| verify | Verify your local instance |
1.1 - wandb agent
Usage
wandb agent [OPTIONS] SWEEP_ID
Summary
Run the W&B agent
Options
| Option | Description |
|---|---|
| -p, --project | The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled 'Uncategorized'. |
| -e, --entity | The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username. |
| --count | The max number of runs for this agent. |
1.2 - wandb artifact
Usage
wandb artifact [OPTIONS] COMMAND [ARGS]...
Summary
Commands for interacting with artifacts
Options
Commands
| Command | Description |
|---|---|
| cache | Commands for interacting with the artifact cache |
| get | Download an artifact from wandb |
| ls | List all artifacts in a wandb project |
| put | Upload an artifact to wandb |
1.2.1 - wandb artifact cache
Usage
wandb artifact cache [OPTIONS] COMMAND [ARGS]...
Summary
Commands for interacting with the artifact cache
Options
Commands
| Command | Description |
|---|---|
| cleanup | Clean up less frequently used files from the artifacts cache |
1.2.1.1 - wandb artifact cache cleanup
Usage
wandb artifact cache cleanup [OPTIONS] TARGET_SIZE
Summary
Clean up less frequently used files from the artifacts cache
Options
| Option | Description |
|---|---|
| --remove-temp / --no-remove-temp | Remove temp files |
1.2.2 - wandb artifact get
Usage
wandb artifact get [OPTIONS] PATH
Summary
Download an artifact from wandb
Options
| Option | Description |
|---|---|
| --root | The directory you want to download the artifact to |
| --type | The type of artifact you are downloading |
1.2.3 - wandb artifact ls
Usage
wandb artifact ls [OPTIONS] PATH
Summary
List all artifacts in a wandb project
Options
| Option | Description |
|---|---|
| -t, --type | The type of artifacts to list |
1.2.4 - wandb artifact put
Usage
wandb artifact put [OPTIONS] PATH
Summary
Upload an artifact to wandb
Options
| Option | Description |
|---|---|
| -n, --name | The name of the artifact to push: project/artifact_name |
| -d, --description | A description of this artifact |
| -t, --type | The type of the artifact |
| -a, --alias | An alias to apply to this artifact |
| --id | The run you want to upload to. |
| --resume | Resume the last run from your current directory. |
| --skip_cache | Skip caching while uploading artifact files. |
| --policy [mutable \| immutable] | |
1.3 - wandb beta
Usage
wandb beta [OPTIONS] COMMAND [ARGS]...
Summary
Beta versions of wandb CLI commands. Requires wandb-core.
Options
Commands
| Command | Description |
|---|---|
| sync | Upload a training run to W&B |
1.3.1 - wandb beta sync
Usage
wandb beta sync [OPTIONS] WANDB_DIR
Summary
Upload a training run to W&B
Options
| Option | Description |
|---|---|
| --id | The run you want to upload to. |
| -p, --project | The project you want to upload to. |
| -e, --entity | The entity to scope to. |
| --skip-console | Skip console logs |
| --append | Append run |
| -i, --include | Glob to include. Can be used multiple times. |
| -e, --exclude | Glob to exclude. Can be used multiple times. |
| --mark-synced / --no-mark-synced | Mark runs as synced |
| --skip-synced / --no-skip-synced | Skip synced runs |
| --dry-run | Perform a dry run without uploading anything. |
1.4 - wandb controller
Usage
wandb controller [OPTIONS] SWEEP_ID
Summary
Run the W&B local sweep controller
Options
| Option | Description |
|---|---|
| --verbose | Display verbose output |
1.5 - wandb disabled
Usage
wandb disabled [OPTIONS]
Summary
Disable W&B.
Options
| Option | Description |
|---|---|
| --service | Disable W&B service [default: True] |
1.6 - wandb docker
Usage
wandb docker [OPTIONS] [DOCKER_RUN_ARGS]... [DOCKER_IMAGE]
Summary
Run your code in a docker container.
W&B docker lets you run your code in a docker image, ensuring wandb is configured. It adds the WANDB_DOCKER and WANDB_API_KEY environment variables to your container and mounts the current directory at /app by default. You can pass additional args, which are added to docker run before the image name is declared. If no image is passed, we choose a default image for you:
wandb docker -v /mnt/dataset:/app/data
wandb docker gcr.io/kubeflow-images-public/tensorflow-1.12.0-notebook-cpu:v0.4.0 --jupyter
wandb docker wandb/deepo:keras-gpu --no-tty --cmd "python train.py --epochs=5"
By default, we override the entrypoint to check for the existence of wandb and install it if not present. If you pass the --jupyter flag, we ensure jupyter is installed and start jupyter lab on port 8888. If we detect nvidia-docker on your system, we use the nvidia runtime. If you just want wandb to set environment variables for an existing docker run command, see the wandb docker-run command.
Options
| Option | Description |
|---|---|
| --nvidia / --no-nvidia | Use the nvidia runtime, defaults to nvidia if nvidia-docker is present |
| --digest | Output the image digest and exit |
| --jupyter / --no-jupyter | Run jupyter lab in the container |
| --dir | Which directory to mount the code in the container |
| --no-dir | Don't mount the current directory |
| --shell | The shell to start the container with |
| --port | The host port to bind jupyter on |
| --cmd | The command to run in the container |
| --no-tty | Run the command without a tty |
1.7 - wandb docker-run
Usage
wandb docker-run [OPTIONS] [DOCKER_RUN_ARGS]...
Summary
Wraps docker run and adds the WANDB_API_KEY and WANDB_DOCKER environment variables.
This will also set the runtime to nvidia if the nvidia-docker executable is present on the system and --runtime wasn't set.
See docker run --help for more details.
Options
1.8 - wandb enabled
Usage
wandb enabled [OPTIONS]
Summary
Enable W&B.
Options
| Option | Description |
|---|---|
| --service | Enable W&B service [default: True] |
1.9 - wandb import
Usage
wandb import [OPTIONS] COMMAND [ARGS]...
Summary
Commands for importing data from other systems
Options
Commands
| Command | Description |
|---|---|
| mlflow | Import from MLFlow |
1.9.1 - wandb import mlflow
Usage
wandb import mlflow [OPTIONS]
Summary
Import from MLFlow
Options
| Option | Description |
|---|---|
| --mlflow-tracking-uri | MLFlow Tracking URI |
| --target-entity | Override default entity to import data into [required] |
| --target-project | Override default project to import data into [required] |
1.10 - wandb init
Usage
wandb init [OPTIONS]
Summary
Configure a directory with Weights & Biases
Options
| Option | Description |
|---|---|
| -p, --project | The project to use. |
| -e, --entity | The entity to scope the project to. |
| --reset | Reset settings |
| -m, --mode | Can be online, offline, or disabled. Defaults to online. |
1.11 - wandb job
Usage
wandb job [OPTIONS] COMMAND [ARGS]...
Summary
Commands for managing and viewing W&B jobs
Options
Commands
| Command | Description |
|---|---|
| create | Create a job from a source, without a wandb run. |
| describe | Describe a launch job. |
| list | List jobs in a project |
1.11.1 - wandb job create
Usage
wandb job create [OPTIONS] {git|code|image} PATH
Summary
Create a job from a source, without a wandb run.
Jobs can be of three types: git, code, or image.
- git: A git source, with an entrypoint either in the path or provided explicitly, pointing to the main python executable.
- code: A code path containing a requirements.txt file.
- image: A docker image.
Options
| Option | Description |
|---|---|
| -p, --project | The project you want to list jobs from. |
| -e, --entity | The entity the jobs belong to |
| -n, --name | Name for the job |
| -d, --description | Description for the job |
| -a, --alias | Alias for the job |
| --entry-point | Entrypoint to the script, including an executable and an entrypoint file. Required for code or repo jobs. If --build-context is provided, paths in the entrypoint command will be relative to the build context. |
| -g, --git-hash | Commit reference to use as the source for git jobs |
| -r, --runtime | Python runtime to execute the job |
| -b, --build-context | Path to the build context from the root of the job source code. If provided, this is used as the base path for the Dockerfile and entrypoint. |
| --base-image | Base image to use for the job. Incompatible with image jobs. |
| --dockerfile | Path to the Dockerfile for the job. If --build-context is provided, the Dockerfile path will be relative to the build context. |
1.11.2 - wandb job describe
Usage
wandb job describe [OPTIONS] JOB
Summary
Describe a launch job. Provide the launch job in the form of:
entity/project/job-name:alias-or-version
Options
1.11.3 - wandb job list
Usage
wandb job list [OPTIONS]
Summary
List jobs in a project
Options
| Option | Description |
|---|---|
| -p, --project | The project you want to list jobs from. |
| -e, --entity | The entity the jobs belong to |
1.12 - wandb launch
Usage
wandb launch [OPTIONS]
Summary
Launch or queue a W&B Job. See https://wandb.me/launch
Options
| Option | Description |
|---|---|
| -u, --uri (str) | Local path or git repo uri to launch. If provided this command will create a job from the specified uri. |
| -j, --job (str) | Name of the job to launch. If passed in, launch does not require a uri. |
| --entry-point | Entry point within project. [default: main]. If the entry point is not found, attempts to run the project file with the specified name as a script, using 'python' to run .py files and the default shell (specified by environment variable $SHELL) to run .sh files. If passed in, will override the entrypoint value passed in using a config file. |
| --build-context (str) | Path to the build context within the source code. Defaults to the root of the source code. Compatible only with -u. |
| --name | Name of the run under which to launch the run. If not specified, a random run name will be used to launch run. If passed in, will override the name passed in using a config file. |
| -e, --entity (str) | Name of the target entity which the new run will be sent to. Defaults to using the entity set by local wandb/settings folder. If passed in, will override the entity value passed in using a config file. |
| -p, --project (str) | Name of the target project which the new run will be sent to. Defaults to using the project name given by the source uri or, for github runs, the git repo name. If passed in, will override the project value passed in using a config file. |
| -r, --resource | Execution resource to use for run. Supported values: 'local-process', 'local-container', 'kubernetes', 'sagemaker', 'gcp-vertex'. This is now a required parameter if pushing to a queue with no resource configuration. If passed in, will override the resource value passed in using a config file. |
| -d, --docker-image | Specific docker image you'd like to use. In the form name:tag. If passed in, will override the docker image value passed in using a config file. |
| --base-image | Docker image to run job code in. Incompatible with --docker-image. |
| -c, --config | Path to JSON file (must end in '.json') or JSON string which will be passed as a launch config. Dictates how the launched run will be configured. |
| -v, --set-var | Set template variable values for queues with allow listing enabled, as key-value pairs. Examples: --set-var key1=value1 --set-var key2=value2 |
| -q, --queue | Name of run queue to push to. If none, launches single run directly. If supplied without an argument (--queue), defaults to queue default. Otherwise, if you supply a queue by name, the queue must exist under the project and entity supplied. |
| --async | Flag to run the job asynchronously. Defaults to false. Unless --async is set, wandb launch waits for the job to finish. This option is incompatible with --queue. Set asynchronous options on wandb launch-agent when running with an agent. |
| --resource-args | Path to JSON file (must end in '.json') or JSON string which will be passed as resource args to the compute resource. The exact content which should be provided is different for each execution backend. See documentation for layout of this file. |
| --dockerfile | Path to the Dockerfile used to build the job, relative to the job's root |
| --priority [critical \| high \| …] | |
1.13 - wandb launch-agent
Usage
wandb launch-agent [OPTIONS]
Summary
Run a W&B launch agent.
Options
| Option | Description |
|---|---|
| -q, --queue | The name of a queue for the agent to watch. Multiple -q flags supported. |
| -e, --entity | The entity to use. Defaults to current logged-in user |
| -l, --log-file | Destination for internal agent logs. Use - for stdout. By default all agent logs will go to debug.log in your wandb/ subdirectory or WANDB_DIR if set. |
| -j, --max-jobs | The maximum number of launch jobs this agent can run in parallel. Defaults to 1. Set to -1 for no upper limit |
| -c, --config | path to the agent config yaml to use |
| -v, --verbose | Display verbose output |
1.14 - wandb launch-sweep
Usage
wandb launch-sweep [OPTIONS] [CONFIG]
Summary
Run a W&B launch sweep (Experimental).
Options
| Option | Description |
|---|---|
| -q, --queue | The name of a queue to push the sweep to |
| -p, --project | Name of the project which the agent will watch. If passed in, will override the project value passed in using a config file |
| -e, --entity | The entity to use. Defaults to current logged-in user |
| -r, --resume_id | Resume a launch sweep by passing an 8-char sweep id. Queue required |
| --prior_run | ID of an existing run to add to this sweep |
1.15 - wandb login
Usage
wandb login [OPTIONS] [KEY]...
Summary
Login to Weights & Biases
Options
| Option | Description |
|---|---|
| --cloud | Login to the cloud instead of local |
| --host | Login to a specific instance of W&B |
| --relogin | Force relogin if already logged in. |
| --anonymously | Log in anonymously |
| --verify | Verify login credentials |
1.16 - wandb offline
Usage
wandb offline [OPTIONS]
Summary
Disable W&B sync
Options
1.17 - wandb online
Usage
wandb online [OPTIONS]
Summary
Enable W&B sync
Options
1.18 - wandb pull
Usage
wandb pull [OPTIONS] RUN
Summary
Pull files from Weights & Biases
Options
| Option | Description |
|---|---|
| -p, --project | The project you want to download. |
| -e, --entity | The entity to scope the listing to. |
1.19 - wandb restore
Usage
wandb restore [OPTIONS] RUN
Summary
Restore code, config and docker state for a run
Options
| Option | Description |
|---|---|
| --no-git | Don't restore git state |
| --branch / --no-branch | Whether to create a branch or checkout detached |
| -p, --project | The project you wish to upload to. |
| -e, --entity | The entity to scope the listing to. |
1.20 - wandb scheduler
Usage
wandb scheduler [OPTIONS] SWEEP_ID
Summary
Run a W&B launch sweep scheduler (Experimental)
Options
1.21 - wandb server
Usage
wandb server [OPTIONS] COMMAND [ARGS]...
Summary
Commands for operating a local W&B server
Options
Commands
| Command | Description |
|---|---|
| start | Start a local W&B server |
| stop | Stop a local W&B server |
1.21.1 - wandb server start
Usage
wandb server start [OPTIONS]
Summary
Start a local W&B server
Options
| Option | Description |
|---|---|
| -p, --port | The host port to bind W&B server on |
| -e, --env | Env vars to pass to wandb/local |
| --daemon / --no-daemon | Run or don't run in daemon mode |
1.21.2 - wandb server stop
Usage
wandb server stop [OPTIONS]
Summary
Stop a local W&B server
Options
1.22 - wandb status
Usage
wandb status [OPTIONS]
Summary
Show configuration settings
Options
| Option | Description |
|---|---|
| --settings / --no-settings | Show the current settings |
1.23 - wandb sweep
Usage
wandb sweep [OPTIONS] CONFIG_YAML_OR_SWEEP_ID
Summary
Initialize a hyperparameter sweep. Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.
Options
| Option | Description |
|---|---|
| -p, --project | The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled Uncategorized. |
| -e, --entity | The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username. |
| --controller | Run local controller |
| --verbose | Display verbose output |
| --name | The name of the sweep. The sweep ID is used if no name is specified. |
| --program | Set sweep program |
| --update | Update pending sweep |
| --stop | Finish a sweep to stop running new runs and let currently running runs finish. |
| --cancel | Cancel a sweep to kill all running runs and stop running new runs. |
| --pause | Pause a sweep to temporarily stop running new runs. |
| --resume | Resume a sweep to continue running new runs. |
| --prior_run | ID of an existing run to add to this sweep |
1.24 - wandb sync
Usage
wandb sync [OPTIONS] [PATH]...
Summary
Upload an offline training directory to W&B
Options
| Option | Description |
|---|---|
| --id | The run you want to upload to. |
| -p, --project | The project you want to upload to. |
| -e, --entity | The entity to scope to. |
| --job_type | Specifies the type of run for grouping related runs together. |
| --sync-tensorboard / --no-sync-tensorboard | Stream tfevent files to wandb. |
| --include-globs | Comma separated list of globs to include. |
| --exclude-globs | Comma separated list of globs to exclude. |
| --include-online / --no-include-online | Include online runs |
| --include-offline / --no-include-offline | Include offline runs |
| --include-synced / --no-include-synced | Include synced runs |
| --mark-synced / --no-mark-synced | Mark runs as synced |
| --sync-all | Sync all runs |
| --clean | Delete synced runs |
| --clean-old-hours | Delete runs created before this many hours. To be used alongside --clean flag. |
| --clean-force | Clean without confirmation prompt. |
| --show | Number of runs to show |
| --append | Append run |
| --skip-console | Skip console logs |
1.25 - wandb verify
Usage
wandb verify [OPTIONS]
Summary
Verify your local instance
Options
| Option | Description |
|---|---|
| --host | Test a specific instance of W&B |
2 - JavaScript Library
The W&B SDK for TypeScript, Node, and modern Web Browsers
Similar to our Python library, we offer a client to track experiments in JavaScript/TypeScript.
- Log metrics from your Node server and display them in interactive plots on W&B
- Debug LLM applications with interactive traces
- Debug LangChain.js usage
This library is compatible with Node and modern JS runtimes.
You can find the source code for the JavaScript client in the GitHub repository.
Our JavaScript integration is still in beta; if you run into issues, please let us know.
Installation
npm install @wandb/sdk
# or ...
yarn add @wandb/sdk
Usage
TypeScript/ESM:
import wandb from '@wandb/sdk'
async function track() {
await wandb.init({config: {test: 1}});
wandb.log({acc: 0.9, loss: 0.1});
wandb.log({acc: 0.91, loss: 0.09});
await wandb.finish();
}
await track()
We spawn a separate MessageChannel to process all API calls asynchronously. This will cause your script to hang if you don't call await wandb.finish().
Node/CommonJS:
const wandb = require('@wandb/sdk').default;
We’re currently missing a lot of the functionality found in our Python SDK, but basic logging functionality is available. We’ll be adding additional features like Tables soon.
Authentication and Settings
In Node environments we look for process.env.WANDB_API_KEY and prompt for its input if we have a TTY. In non-Node environments we look for sessionStorage.getItem("WANDB_API_KEY"). Additional settings can be found here.
Integrations
Our Python integrations are widely used by our community, and we hope to build out more JavaScript integrations to help LLM app builders leverage whatever tool they want.
If you have any requests for additional integrations, we’d love you to open an issue with details about the request.
LangChain.js
This library integrates with the popular library for building LLM applications, LangChain.js version >= 0.0.75.
Usage
import {WandbTracer} from '@wandb/sdk/integrations/langchain';
const wbTracer = await WandbTracer.init({project: 'langchain-test'});
// run your langchain workloads...
chain.call({input: "My prompt"}, wbTracer)
await WandbTracer.finish();
We spawn a separate MessageChannel to process all API calls asynchronously. This will cause your script to hang if you don't call await WandbTracer.finish().
See this test for a more detailed example.
3 - Python Library
Use wandb to track machine learning work.
Train and fine-tune models, manage models from experimentation to production.
For guides and examples, see https://docs.wandb.ai.
For scripts and interactive notebooks, see https://github.com/wandb/examples.
For reference documentation, see https://docs.wandb.com/ref/python.
Classes
class Artifact
: Flexible and lightweight building block for dataset and model versioning.
class Run
: A unit of computation logged by wandb. Typically, this is an ML experiment.
Functions
agent(...)
: Start one or more sweep agents.
controller(...)
: Public sweep controller constructor.
finish(...)
: Finish a run and upload any remaining data.
init(...)
: Start a new run to track and log to W&B.
log(...)
: Upload run data.
login(...)
: Set up W&B login credentials.
save(...)
: Sync one or more files to W&B.
sweep(...)
: Initialize a hyperparameter sweep.
watch(...)
: Hooks into the given PyTorch models to monitor gradients and the model’s computational graph.
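A minimal sketch of how these core functions fit together; the project name, config values, and metric values below are placeholders:

```python
import wandb

# Start a run, log a few metrics, then finish the run.
run = wandb.init(project="my-project", config={"learning_rate": 0.001})
for step in range(10):
    run.log({"loss": 1.0 / (step + 1)})
wandb.finish()
```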
| Other Members | |
|---|---|
| __version__ | '0.19.3' |
| config | |
| summary | |
3.1 - agent
Start one or more sweep agents.
agent(
sweep_id: str,
function: Optional[Callable] = None,
entity: Optional[str] = None,
project: Optional[str] = None,
count: Optional[int] = None
) -> None
The sweep agent uses the sweep_id to know which sweep it is a part of, what function to execute, and (optionally) how many agents to run.
| Args | |
|---|---|
| sweep_id | The unique identifier for a sweep. A sweep ID is generated by W&B CLI or Python SDK. |
| function | A function to call instead of the "program" specified in the sweep config. |
| entity | The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username. |
| project | The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled Uncategorized. |
| count | The number of sweep config trials to try. |
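A brief sketch of running an agent in-process; the sweep ID and the hyperparameter name lr are placeholders:

```python
import wandb

def train():
    # Each trial is a run whose config is populated by the sweep.
    with wandb.init() as run:
        lr = run.config.lr  # hyperparameter chosen by the sweep
        run.log({"loss": 1.0 / lr})

# The sweep ID comes from `wandb sweep` or wandb.sweep(); shown here as a placeholder.
wandb.agent("entity/project/sweep_id", function=train, count=5)
```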
3.2 - Artifact
Flexible and lightweight building block for dataset and model versioning.
Artifact(
name: str,
type: str,
description: (str | None) = None,
metadata: (dict[str, Any] | None) = None,
incremental: bool = (False),
use_as: (str | None) = None
) -> None
Construct an empty W&B Artifact. Populate an artifact's contents with methods that begin with add. Once the artifact has all the desired files, you can call wandb.log_artifact() to log it.
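A minimal sketch of that flow; the project name, artifact name, and file path are placeholders:

```python
import wandb

with wandb.init(project="my-project") as run:
    # Create an artifact, add a local file, and log it with the current run.
    artifact = wandb.Artifact(name="my-dataset", type="dataset")
    artifact.add_file("data.csv")  # assumes data.csv exists locally
    run.log_artifact(artifact)
```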
Args |
|
name |
A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the use_artifact Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project. |
type |
The artifact’s type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include dataset or model . Include model within your type string if you want to link the artifact to the W&B Model Registry. |
description |
A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact’s description programmatically with the Artifact.description attribute or programmatically with the W&B App UI. W&B renders the description as markdown in the W&B App. |
metadata |
Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys. |
Returns |
|
An Artifact object. |
|
Attributes |
|
aliases |
List of one or more semantically friendly references or identifying “nicknames” assigned to an artifact version. Aliases are mutable references that you can programmatically reference. Change an artifact’s alias with the W&B App UI or programmatically. See Create new artifact versions for more information. |
collection |
The collection this artifact was retrieved from. A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence. |
commit_hash |
The hash returned when this artifact was committed. |
created_at |
Timestamp when the artifact was created. |
description |
A description of the artifact. |
digest |
The logical digest of the artifact. The digest is the checksum of the artifact’s contents. If an artifact has the same digest as the current latest version, then log_artifact is a no-op. |
entity |
The name of the entity of the secondary (portfolio) artifact collection. |
file_count |
The number of files (including references). |
id |
The artifact’s ID. |
manifest |
The artifact’s manifest. The manifest lists all of its contents, and can’t be changed once the artifact has been logged. |
metadata |
User-defined artifact metadata. Structured data associated with the artifact. |
name |
The artifact name and version in its secondary (portfolio) collection. A string with the format {collection}:{alias} . Before the artifact is saved, contains only the name since the version is not yet known. |
project |
The name of the project of the secondary (portfolio) artifact collection. |
qualified_name |
The entity/project/name of the secondary (portfolio) collection. |
size |
The total size of the artifact in bytes. Includes any references tracked by this artifact. |
source_collection |
The artifact’s primary (sequence) collection. |
source_entity |
The name of the entity of the primary (sequence) artifact collection. |
source_name |
The artifact name and version in its primary (sequence) collection. A string with the format {collection}:{alias} . Before the artifact is saved, contains only the name since the version is not yet known. |
source_project |
The name of the project of the primary (sequence) artifact collection. |
source_qualified_name |
The entity/project/name of the primary (sequence) collection. |
source_version |
The artifact’s version in its primary (sequence) collection. A string with the format v{number} . |
state |
The status of the artifact. One of: PENDING , COMMITTED , or DELETED . |
tags |
List of one or more tags assigned to this artifact version. |
ttl |
The time-to-live (TTL) policy of an artifact. Artifacts are deleted shortly after a TTL policy’s duration passes. If set to None , the artifact deactivates TTL policies and will be not scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on an artifact. |
type |
The artifact’s type. Common types include dataset or model . |
updated_at |
The time when the artifact was last updated. |
version |
The artifact’s version in its secondary (portfolio) collection. |
Methods
add
View source
add(
obj: WBValue,
name: StrPath,
overwrite: bool = (False)
) -> ArtifactManifestEntry
Add wandb.WBValue
obj
to the artifact.
Args |
|
obj |
The object to add. Currently support one of Bokeh , JoinedTable , PartitionedTable , Table , Classes , ImageMask , BoundingBoxes2D , Audio , Image , Video , Html , Object3D |
name |
The path within the artifact to add the object. |
overwrite |
If True, overwrite existing objects with the same file path (if applicable). |
Returns |
|
The added manifest entry |
|
Raises |
|
ArtifactFinalizedError |
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead. |
add_dir
View source
add_dir(
local_path: str,
name: (str | None) = None,
skip_cache: (bool | None) = (False),
policy: (Literal['mutable', 'immutable'] | None) = "mutable"
) -> None
Add a local directory to the artifact.
| Args | |
|---|---|
| local_path | The path of the local directory. |
| name | The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by artifact's type. Defaults to the root of the artifact. |
| skip_cache | If set to True, W&B will not copy/move files to the cache while uploading |
| policy | By default, set to "mutable". If set to "mutable", creates a temporary copy of the files to prevent corruption during upload. If set to "immutable", disables protection and relies on the user not to delete or change the files. |

| Raises | |
|---|---|
| ArtifactFinalizedError | You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead. |
| ValueError | Policy must be "mutable" or "immutable" |
add_file
View source
add_file(
local_path: str,
name: (str | None) = None,
is_tmp: (bool | None) = (False),
skip_cache: (bool | None) = (False),
policy: (Literal['mutable', 'immutable'] | None) = "mutable",
overwrite: bool = (False)
) -> ArtifactManifestEntry
Add a local file to the artifact.
Args |
|
local_path |
The path to the file being added. |
name |
The path within the artifact to use for the file being added. Defaults to the basename of the file. |
is_tmp |
If true, then the file is renamed deterministically to avoid collisions. |
skip_cache |
If True , W&B will not copy files to the cache after uploading. |
policy |
By default, set to mutable . If set to mutable , create a temporary copy of the file to prevent corruption during upload. If set to immutable , disable protection and rely on the user not to delete or change the file. |
overwrite |
If True , overwrite the file if it already exists. |
Returns |
|
The added manifest entry. |
|
Raises |
|
ArtifactFinalizedError |
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead. |
ValueError |
Policy must be “mutable” or “immutable” |
add_reference
View source
add_reference(
uri: (ArtifactManifestEntry | str),
name: (StrPath | None) = None,
checksum: bool = (True),
max_objects: (int | None) = None
) -> Sequence[ArtifactManifestEntry]
Add a reference denoted by a URI to the artifact.
Unlike files or directories that you add to an artifact, references are not
uploaded to W&B. For more information,
see Track external files.
By default, the following schemes are supported:
- http or https: The size and digest of the file will be inferred by the Content-Length and the ETag response headers returned by the server.
- s3: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- gs: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- https, domain matching *.blob.core.windows.net (Azure): The checksum and size are pulled from the blob metadata. If storage account versioning is enabled, then the version ID is also tracked.
- file: The checksum and size are pulled from the file system. This scheme is useful if you have an NFS share or other externally mounted volume containing files you wish to track but not necessarily upload.
For any other scheme, the digest is just a hash of the URI and the size is left blank.
Args |
|
uri |
The URI path of the reference to add. The URI path can be an object returned from Artifact.get_entry to store a reference to another artifact’s entry. |
name |
The path within the artifact to place the contents of this reference. |
checksum |
Whether or not to checksum the resources located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting checksum=False when adding reference objects, in which case a new version will only be created if the reference URI changes. |
max_objects |
The maximum number of objects to consider when adding a reference that points to directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum. |
Returns |
|
The added manifest entries. |
|
Raises |
|
ArtifactFinalizedError |
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead. |
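A short sketch of tracking an external object by reference; the bucket path and artifact name are placeholders:

```python
import wandb

# Only checksums and metadata are recorded; nothing under the prefix is uploaded.
artifact = wandb.Artifact(name="raw-data", type="dataset")
artifact.add_reference("s3://my-bucket/datasets/train/")
```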
checkout
View source
checkout(
root: (str | None) = None
) -> str
Replace the specified root directory with the contents of the artifact.
Warning
This will delete all files in root
that are not included in the artifact.
Args |
|
root |
The directory to replace with this artifact’s files. |
Returns |
|
The path of the checked out contents. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
delete
View source
delete(
delete_aliases: bool = (False)
) -> None
Delete an artifact and its files.
If called on a linked artifact, such as a member of a portfolio collection, deletes only the link, not the source artifact.
Args |
|
delete_aliases |
If set to True , deletes all aliases associated with the artifact. Otherwise, raises an exception if the artifact has existing aliases. Ignored if the artifact is linked, such as if it is a member of a portfolio collection. |
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
download
View source
download(
root: (StrPath | None) = None,
allow_missing_references: bool = (False),
skip_cache: (bool | None) = None,
path_prefix: (StrPath | None) = None
) -> FilePathStr
Download the contents of the artifact to the specified root directory.
Existing files located within root
are not modified. Explicitly delete root
before you call download
if you want the contents of root
to exactly match
the artifact.
Args |
|
root |
The directory W&B stores the artifact’s files. |
allow_missing_references |
If set to True , any invalid reference paths will be ignored while downloading referenced files. |
skip_cache |
If set to True , the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory. |
path_prefix |
If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes). |
Returns |
|
The path to the downloaded contents. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
RuntimeError |
If the artifact is attempted to be downloaded in offline mode. |
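A sketch of fetching a logged artifact through the public API and downloading it; the artifact path is a placeholder:

```python
import wandb

api = wandb.Api()
artifact = api.artifact("entity/project/my-dataset:latest")
local_dir = artifact.download()  # path where the contents were written
print(local_dir)
```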
file
View source
file(
root: (str | None) = None
) -> StrPath
Download a single file artifact to the directory you specify with root
.
Args |
|
root |
The root directory to store the file. Defaults to ‘./artifacts/self.name/’. |
Returns |
|
The full path of the downloaded file. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
ValueError |
If the artifact contains more than one file. |
files
View source
files(
names: (list[str] | None) = None,
per_page: int = 50
) -> ArtifactFiles
Iterate over all files stored in this artifact.
Args |
|
names |
The filename paths relative to the root of the artifact you wish to list. |
per_page |
The number of files to return per request. |
Returns |
|
An iterator containing File objects. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
finalize
View source
Finalize the artifact version.
You cannot modify an artifact version once it is finalized because the artifact
is logged as a specific artifact version. Create a new artifact version
to log more data to an artifact. An artifact is automatically finalized
when you log the artifact with log_artifact
.
get
View source
get(
name: str
) -> (WBValue | None)
Get the WBValue object located at the artifact relative name
.
Args |
|
name |
The artifact relative name to retrieve. |
Returns |
|
W&B object that can be logged with wandb.log() and visualized in the W&B UI. |
|
Raises |
|
ArtifactNotLoggedError |
if the artifact isn’t logged or the run is offline |
get_added_local_path_name
View source
get_added_local_path_name(
local_path: str
) -> (str | None)
Get the artifact relative name of a file added by a local filesystem path.
Args |
|
local_path |
The local path to resolve into an artifact relative name. |
Returns |
|
The artifact relative name. |
|
get_entry
View source
get_entry(
name: StrPath
) -> ArtifactManifestEntry
Get the entry with the given name.
Args |
|
name |
The artifact relative name to get |
Raises |
|
ArtifactNotLoggedError |
if the artifact isn’t logged or the run is offline. |
KeyError |
if the artifact doesn’t contain an entry with the given name. |
get_path
View source
get_path(
name: StrPath
) -> ArtifactManifestEntry
Deprecated. Use get_entry(name)
.
is_draft
View source
Check if artifact is not saved.
Returns: Boolean. False
if artifact is saved. True
if artifact is not saved.
json_encode
View source
json_encode() -> dict[str, Any]
Returns the artifact encoded to the JSON format.
Returns |
|
A dict with string keys representing attributes of the artifact. |
|
link
View source
link(
target_path: str,
aliases: (list[str] | None) = None
) -> None
Link this artifact to a portfolio (a promoted collection of artifacts).
Args |
|
target_path |
The path to the portfolio inside a project. The target path must adhere to one of the following schemas {portfolio} , {project}/{portfolio} or {entity}/{project}/{portfolio} . To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set target_path to the following schema {"model-registry"}/{Registered Model Name} or {entity}/{"model-registry"}/{Registered Model Name} . |
aliases |
A list of strings that uniquely identifies the artifact inside the specified portfolio. |
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
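A sketch of logging a model artifact and linking it to a registered model; the project, artifact, file, and registered model names are placeholders:

```python
import wandb

with wandb.init(project="my-project") as run:
    artifact = wandb.Artifact(name="my-model", type="model")
    artifact.add_file("weights.pt")  # assumes the file exists locally
    run.log_artifact(artifact)
    artifact.wait()  # make sure the artifact is logged before linking
    # Link to a registered model in the Model Registry (placeholder name).
    artifact.link("model-registry/My Registered Model", aliases=["staging"])
```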
logged_by
View source
logged_by() -> (Run | None)
Get the W&B run that originally logged the artifact.
Returns |
|
The name of the W&B run that originally logged the artifact. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
new_draft
View source
Create a new draft artifact with the same content as this committed artifact.
The artifact returned can be extended or modified and logged as a new version.
Returns |
|
An Artifact object. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
new_file
View source
@contextlib.contextmanager
new_file(
name: str,
mode: str = "x",
encoding: (str | None) = None
) -> Iterator[IO]
Open a new temporary file and add it to the artifact.
Args |
|
name |
The name of the new file to add to the artifact. |
mode |
The file access mode to use to open the new file. |
encoding |
The encoding used to open the new file. |
Returns |
|
A new file object that can be written to. Upon closing, the file will be automatically added to the artifact. |
|
Raises |
|
ArtifactFinalizedError |
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead. |
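A short sketch of writing a file directly into a draft artifact; the artifact name and file contents are placeholders:

```python
import wandb

artifact = wandb.Artifact(name="notes", type="dataset")
# The file is written to a temporary location and added to the artifact
# automatically when the context manager exits.
with artifact.new_file("hello.txt") as f:
    f.write("hello world")
```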
remove
View source
remove(
item: (StrPath | ArtifactManifestEntry)
) -> None
Remove an item from the artifact.
Args |
|
item |
The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory all items in that directory will be removed. |
Raises |
|
ArtifactFinalizedError |
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead. |
FileNotFoundError |
If the item isn’t found in the artifact. |
save
View source
save(
project: (str | None) = None,
settings: (wandb.Settings | None) = None
) -> None
Persist any changes made to the artifact.
If currently in a run, that run will log this artifact. If not currently in a
run, a run of type “auto” is created to track this artifact.
Args |
|
project |
A project to use for the artifact in the case that a run is not already in context. |
settings |
A settings object to use when initializing an automatic run. Most commonly used in testing harness. |
unlink
View source
Unlink this artifact if it is currently a member of a portfolio (a promoted collection of artifacts).
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
ValueError |
If the artifact is not linked, such as if it is not a member of a portfolio collection. |
used_by
View source
Get a list of the runs that have used this artifact.
Returns |
|
A list of Run objects. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
verify
View source
verify(
root: (str | None) = None
) -> None
Verify that the contents of an artifact match the manifest.
All files in the directory are checksummed and the checksums are then
cross-referenced against the artifact’s manifest. References are not verified.
Args |
|
root |
The directory to verify. If None artifact will be downloaded to ‘./artifacts/self.name/’ |
Raises |
|
ArtifactNotLoggedError |
If the artifact is not logged. |
ValueError |
If the verification fails. |
wait
View source
wait(
timeout: (int | None) = None
) -> Artifact
If needed, wait for this artifact to finish logging.
Args |
|
timeout |
The time, in seconds, to wait. |
Returns |
|
An Artifact object. |
|
__getitem__
View source
__getitem__(
name: str
) -> (WBValue | None)
Get the WBValue object located at the artifact relative name
.
Args |
|
name |
The artifact relative name to get. |
Returns |
|
W&B object that can be logged with wandb.log() and visualized in the W&B UI. |
|
Raises |
|
ArtifactNotLoggedError |
If the artifact isn’t logged or the run is offline. |
__setitem__
View source
__setitem__(
name: str,
item: WBValue
) -> ArtifactManifestEntry
Add item
to the artifact at path name
.
Args |
|
name |
The path within the artifact to add the object. |
item |
The object to add. |
Returns |
|
The added manifest entry |
|
Raises |
|
ArtifactFinalizedError |
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead. |
3.3 - controller
Public sweep controller constructor.
controller(
sweep_id_or_config: Optional[Union[str, Dict]] = None,
entity: Optional[str] = None,
project: Optional[str] = None
) -> "_WandbController"
Usage:
import wandb
tuner = wandb.controller(...)
print(tuner.sweep_config)
print(tuner.sweep_id)
tuner.configure_search(...)
tuner.configure_stopping(...)
3.4 - Data Types
This module defines data types for logging rich, interactive visualizations to W&B.
Data types include common media types, like images, audio, and videos,
flexible containers for information, like tables and HTML, and more.
For more on logging media, see our guide
For more on logging structured data for interactive dataset and model analysis,
see our guide to W&B Tables.
All of these special data types are subclasses of WBValue. All the data types
serialize to JSON, since that is what wandb uses to save the objects locally
and upload them to the W&B server.
Classes
class Audio
: Wandb class for audio clips.
class BoundingBoxes2D
: Format images with 2D bounding box overlays for logging to W&B.
class Graph
: Wandb class for graphs.
class Histogram
: wandb class for histograms.
class Html
: Wandb class for arbitrary html.
class Image
: Format images for logging to W&B.
class ImageMask
: Format image masks or overlays for logging to W&B.
class Molecule
: Wandb class for 3D Molecular data.
class Object3D
: Wandb class for 3D point clouds.
class Plotly
: Wandb class for plotly plots.
class Table
: The Table class used to display and analyze tabular data.
class Video
: Format a video for logging to W&B.
class WBTraceTree
: Media object for trace tree data.
3.4.1 - Audio
Wandb class for audio clips.
Audio(
data_or_path, sample_rate=None, caption=None
)
| Args | |
|---|---|
| data_or_path | (string or numpy array) A path to an audio file or a numpy array of audio data. |
| sample_rate | (int) Sample rate, required when passing in raw numpy array of audio data. |
| caption | (string) Caption to display with audio. |
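A small sketch of logging a generated clip from a numpy array; the tone parameters are arbitrary:

```python
import numpy as np
import wandb

with wandb.init() as run:
    sample_rate = 16000
    t = np.linspace(0, 1, sample_rate, endpoint=False)
    tone = np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz sine wave
    run.log({"tone": wandb.Audio(tone, sample_rate=sample_rate, caption="440 Hz sine")})
```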
Methods
durations
View source
@classmethod
durations(
audio_list
)
resolve_ref
View source
sample_rates
View source
@classmethod
sample_rates(
audio_list
)
3.4.2 - BoundingBoxes2D
Format images with 2D bounding box overlays for logging to W&B.
BoundingBoxes2D(
val: dict,
key: str
) -> None
Args |
|
val |
(dictionary) A dictionary of the following form: box_data: (list of dictionaries) One dictionary for each bounding box, containing: position: (dictionary) the position and size of the bounding box, in one of two formats Note that boxes need not all use the same format. {"minX", "minY", "maxX", "maxY"}: (dictionary) A set of coordinates defining the upper and lower bounds of the box (the bottom left and top right corners) {"middle", "width", "height"}: (dictionary) A set of coordinates defining the center and dimensions of the box, with "middle" as a list [x, y] for the center point and "width" and "height" as numbers domain: (string) One of two options for the bounding box coordinate domain null: By default, or if no argument is passed, the coordinate domain is assumed to be relative to the original image, expressing this box as a fraction or percentage of the original image. This means all coordinates and dimensions passed into the "position" argument are floating point numbers between 0 and 1. "pixel": (string literal) The coordinate domain is set to the pixel space. This means all coordinates and dimensions passed into "position" are integers within the bounds of the image dimensions. class_id: (integer) The class label id for this box scores: (dictionary of string to number, optional) A mapping of named fields to numerical values (float or int ), can be used for filtering boxes in the UI based on a range of values for the corresponding field box_caption: (string, optional) A string to be displayed as the label text above this box in the UI, often composed of the class label, class name, and/or scores class_labels: (dictionary, optional) A map of integer class labels to their readable class names |
key |
(string) The readable name or id for this set of bounding boxes. Examples: predictions , ground_truth |
Examples:
Log bounding boxes for a single image
import numpy as np
import wandb
wandb.init()
image = np.random.randint(low=0, high=256, size=(200, 300, 3))
class_labels = {0: "person", 1: "car", 2: "road", 3: "building"}
img = wandb.Image(
image,
boxes={
"predictions": {
"box_data": [
{
# one box expressed in the default relative/fractional domain
"position": {
"minX": 0.1,
"maxX": 0.2,
"minY": 0.3,
"maxY": 0.4,
},
"class_id": 1,
"box_caption": class_labels[1],
"scores": {"acc": 0.2, "loss": 1.2},
},
{
# another box expressed in the pixel domain
"position": {"middle": [150, 20], "width": 68, "height": 112},
"domain": "pixel",
"class_id": 3,
"box_caption": "a building",
"scores": {"acc": 0.5, "loss": 0.7},
},
# Log as many boxes as needed
],
"class_labels": class_labels,
}
},
)
wandb.log({"driving_scene": img})
Log a bounding box overlay to a Table
import numpy as np
import wandb
wandb.init()
image = np.random.randint(low=0, high=256, size=(200, 300, 3))
class_labels = {0: "person", 1: "car", 2: "road", 3: "building"}
class_set = wandb.Classes(
[
{"name": "person", "id": 0},
{"name": "car", "id": 1},
{"name": "road", "id": 2},
{"name": "building", "id": 3},
]
)
img = wandb.Image(
image,
boxes={
"predictions": {
"box_data": [
{
# one box expressed in the default relative/fractional domain
"position": {
"minX": 0.1,
"maxX": 0.2,
"minY": 0.3,
"maxY": 0.4,
},
"class_id": 1,
"box_caption": class_labels[1],
"scores": {"acc": 0.2, "loss": 1.2},
},
{
# another box expressed in the pixel domain
"position": {"middle": [150, 20], "width": 68, "height": 112},
"domain": "pixel",
"class_id": 3,
"box_caption": "a building",
"scores": {"acc": 0.5, "loss": 0.7},
},
# Log as many boxes as needed
],
"class_labels": class_labels,
}
},
classes=class_set,
)
table = wandb.Table(columns=["image"])
table.add_data(img)
wandb.log({"driving_scene": table})
Methods
type_name
View source
@classmethod
type_name() -> str
validate
View source
validate(
val: dict
) -> bool
3.4.3 - Graph
Wandb class for graphs.
This class is typically used for saving and displaying neural net models. It
represents the graph as an array of nodes and edges. The nodes can have
labels that can be visualized by wandb.
Examples:
Import a keras model:
Graph.from_keras(keras_model)
Methods
add_edge
View source
add_edge(
from_node, to_node
)
add_node
View source
add_node(
node=None, **node_kwargs
)
from_keras
View source
@classmethod
from_keras(
model
)
pprint
View source
__getitem__
View source
3.4.4 - Histogram
wandb class for histograms.
Histogram(
sequence: Optional[Sequence] = None,
np_histogram: Optional['NumpyHistogram'] = None,
num_bins: int = 64
) -> None
This object works just like numpy’s histogram function
https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
Examples:
Generate histogram from a sequence
wandb.Histogram([1, 2, 3])
Efficiently initialize from np.histogram.
hist = np.histogram(data)
wandb.Histogram(np_histogram=hist)
Args |
|
sequence |
(array_like) input data for histogram |
np_histogram |
(numpy histogram) alternative input of a precomputed histogram |
num_bins |
(int) Number of bins for the histogram. The default number of bins is 64. The maximum number of bins is 512 |
Attributes |
|
bins |
([float]) edges of bins |
histogram |
([int]) number of elements falling in each bin |
Class Variables |
|
MAX_LENGTH |
512 |
3.4.5 - Html
Wandb class for arbitrary html.
Html(
data: Union[str, 'TextIO'],
inject: bool = (True)
) -> None
| Args | |
|---|---|
| data | (string or io object) HTML to display in wandb |
| inject | (boolean) Add a stylesheet to the HTML object. If set to False the HTML will pass through unchanged. |
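A minimal sketch of logging an HTML snippet; the markup is arbitrary:

```python
import wandb

with wandb.init() as run:
    run.log({"report": wandb.Html("<h1>Training summary</h1><p>All metrics logged.</p>")})
```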
Methods
inject_head
View source
3.4.6 - Image
Format images for logging to W&B.
Image(
data_or_path: "ImageDataOrPathType",
mode: Optional[str] = None,
caption: Optional[str] = None,
grouping: Optional[int] = None,
classes: Optional[Union['Classes', Sequence[dict]]] = None,
boxes: Optional[Union[Dict[str, 'BoundingBoxes2D'], Dict[str, dict]]] = None,
masks: Optional[Union[Dict[str, 'ImageMask'], Dict[str, dict]]] = None,
file_type: Optional[str] = None
) -> None
Args |
|
data_or_path |
(numpy array, string, io) Accepts numpy array of image data, or a PIL image. The class attempts to infer the data format and converts it. |
mode |
(string) The PIL mode for an image. Most common are L, RGB, RGBA. Full explanation at https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes |
caption |
(string) Label for display of image. |
Note: When logging a torch.Tensor as a wandb.Image, images are normalized. If you do not want to normalize your images, please convert your tensors to a PIL Image.
Examples:
Create a wandb.Image
from a numpy array
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Create a wandb.Image
from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(
low=0, high=256, size=(100, 100, 3), dtype=np.uint8
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Log .jpg rather than .png (default)
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}", file_type="jpg")
examples.append(image)
run.log({"examples": examples})
Methods
all_boxes
View source
@classmethod
all_boxes(
images: Sequence['Image'],
run: "LocalRun",
run_key: str,
step: Union[int, str]
) -> Union[List[Optional[dict]], bool]
all_captions
View source
@classmethod
all_captions(
images: Sequence['Media']
) -> Union[bool, Sequence[Optional[str]]]
all_masks
View source
@classmethod
all_masks(
images: Sequence['Image'],
run: "LocalRun",
run_key: str,
step: Union[int, str]
) -> Union[List[Optional[dict]], bool]
guess_mode
View source
guess_mode(
data: "np.ndarray"
) -> str
Guess what type of image the np.array is representing.
to_uint8
View source
@classmethod
to_uint8(
data: "np.ndarray"
) -> "np.ndarray"
Convert image data to uint8.
Convert floating point image on the range [0,1] and integer images on the range
[0,255] to uint8, clipping if necessary.
Class Variables |
|
MAX_DIMENSION |
65500 |
MAX_ITEMS |
108 |
3.4.7 - ImageMask
Format image masks or overlays for logging to W&B.
ImageMask(
val: dict,
key: str
) -> None
Args |
|
val |
(dictionary) One of these two keys to represent the image: mask_data : (2D numpy array) The mask containing an integer class label for each pixel in the image path : (string) The path to a saved image file of the mask class_labels : (dictionary of integers to strings, optional) A mapping of the integer class labels in the mask to readable class names. These will default to class_0, class_1, class_2, etc. |
key |
(string) The readable name or id for this mask type. Examples: predictions , ground_truth |
Examples:
Logging a single masked image
import numpy as np
import wandb
wandb.init()
image = np.random.randint(low=0, high=256, size=(100, 100, 3), dtype=np.uint8)
predicted_mask = np.empty((100, 100), dtype=np.uint8)
ground_truth_mask = np.empty((100, 100), dtype=np.uint8)
predicted_mask[:50, :50] = 0
predicted_mask[50:, :50] = 1
predicted_mask[:50, 50:] = 2
predicted_mask[50:, 50:] = 3
ground_truth_mask[:25, :25] = 0
ground_truth_mask[25:, :25] = 1
ground_truth_mask[:25, 25:] = 2
ground_truth_mask[25:, 25:] = 3
class_labels = {0: "person", 1: "tree", 2: "car", 3: "road"}
masked_image = wandb.Image(
image,
masks={
"predictions": {"mask_data": predicted_mask, "class_labels": class_labels},
"ground_truth": {
"mask_data": ground_truth_mask,
"class_labels": class_labels,
},
},
)
wandb.log({"img_with_masks": masked_image})
Log a masked image inside a Table
import numpy as np
import wandb
wandb.init()
image = np.random.randint(low=0, high=256, size=(100, 100, 3), dtype=np.uint8)
predicted_mask = np.empty((100, 100), dtype=np.uint8)
ground_truth_mask = np.empty((100, 100), dtype=np.uint8)
predicted_mask[:50, :50] = 0
predicted_mask[50:, :50] = 1
predicted_mask[:50, 50:] = 2
predicted_mask[50:, 50:] = 3
ground_truth_mask[:25, :25] = 0
ground_truth_mask[25:, :25] = 1
ground_truth_mask[:25, 25:] = 2
ground_truth_mask[25:, 25:] = 3
class_labels = {0: "person", 1: "tree", 2: "car", 3: "road"}
class_set = wandb.Classes(
[
{"name": "person", "id": 0},
{"name": "tree", "id": 1},
{"name": "car", "id": 2},
{"name": "road", "id": 3},
]
)
masked_image = wandb.Image(
image,
masks={
"predictions": {"mask_data": predicted_mask, "class_labels": class_labels},
"ground_truth": {
"mask_data": ground_truth_mask,
"class_labels": class_labels,
},
},
classes=class_set,
)
table = wandb.Table(columns=["image"])
table.add_data(masked_image)
wandb.log({"random_field": table})
Methods
type_name
View source
@classmethod
type_name() -> str
validate
View source
validate(
val: dict
) -> bool
3.4.8 - Molecule
Wandb class for 3D Molecular data.
Molecule(
data_or_path: Union[str, 'TextIO'],
caption: Optional[str] = None,
**kwargs
) -> None
Args |
|
data_or_path |
(string, io) Molecule can be initialized from a file name or an io object. |
caption |
(string) Caption associated with the molecule for display. |
Methods
from_rdkit
View source
@classmethod
from_rdkit(
data_or_path: "RDKitDataType",
caption: Optional[str] = None,
convert_to_3d_and_optimize: bool = (True),
mmff_optimize_molecule_max_iterations: int = 200
) -> "Molecule"
Convert RDKit-supported file/object types to wandb.Molecule
.
Args |
|
data_or_path |
(string, rdkit.Chem.rdchem.Mol) Molecule can be initialized from a file name or an rdkit.Chem.rdchem.Mol object. |
caption |
(string) Caption associated with the molecule for display. |
convert_to_3d_and_optimize |
(bool) Convert to rdkit.Chem.rdchem.Mol with 3D coordinates. This is an expensive operation that may take a long time for complicated molecules. |
mmff_optimize_molecule_max_iterations |
(int) Number of iterations to use in rdkit.Chem.AllChem.MMFFOptimizeMolecule |
from_smiles
View source
@classmethod
from_smiles(
data: str,
caption: Optional[str] = None,
sanitize: bool = (True),
convert_to_3d_and_optimize: bool = (True),
mmff_optimize_molecule_max_iterations: int = 200
) -> "Molecule"
Convert SMILES string to wandb.Molecule
.
Args |
|
data |
(string) SMILES string. |
caption |
(string) Caption associated with the molecule for display |
sanitize |
(bool) Check if the molecule is chemically reasonable by the RDKit’s definition. |
convert_to_3d_and_optimize |
(bool) Convert to rdkit.Chem.rdchem.Mol with 3D coordinates. This is an expensive operation that may take a long time for complicated molecules. |
mmff_optimize_molecule_max_iterations |
(int) Number of iterations to use in rdkit.Chem.AllChem.MMFFOptimizeMolecule |
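Example (a minimal sketch of logging a molecule from a SMILES string; assumes RDKit is installed, and the project name is a placeholder):
import wandb

with wandb.init(project="molecules") as run:
    mol = wandb.Molecule.from_smiles(
        "Cn1cnc2c1c(=O)n(C)c(=O)n2C",  # caffeine
        caption="caffeine",
    )
    run.log({"caffeine": mol})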
Class Variables |
|
SUPPORTED_RDKIT_TYPES |
|
SUPPORTED_TYPES |
|
3.4.9 - Object3D
Wandb class for 3D point clouds.
Object3D(
data_or_path: Union['np.ndarray', str, 'TextIO', dict],
**kwargs
) -> None
Args |
|
data_or_path |
(numpy array, string, io) Object3D can be initialized from a file or a numpy array. You can pass a path to a file or an io object and a file_type which must be one of SUPPORTED_TYPES |
The shape of the numpy array must be one of either:
[[x y z], ...] nx3
[[x y z c], ...] nx4 where c is a category with supported range [1, 14]
[[x y z r g b], ...] nx6 where r, g, b are the color channels
Methods
from_file
View source
@classmethod
from_file(
data_or_path: Union['TextIO', str],
file_type: Optional['FileFormat3D'] = None
) -> "Object3D"
Initializes Object3D from a file or stream.
Args |
|
data_or_path (Union["TextIO", str]) : A path to a file or a TextIO stream. file_type (str) : Specifies the data format passed to data_or_path . Required when data_or_path is a TextIO stream. This parameter is ignored if a file path is provided. The type is taken from the file extension. |
|
from_numpy
View source
@classmethod
from_numpy(
data: "np.ndarray"
) -> "Object3D"
Initializes Object3D from a numpy array.
Args |
|
data (numpy array): Each entry in the array will represent one point in the point cloud. |
|
The shape of the numpy array must be one of either:
[[x y z], ...] # nx3.
[[x y z c], ...] # nx4 where c is a category with supported range [1, 14].
[[x y z r g b], ...] # nx6 where r, g, b are the color channels.
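Example (a minimal sketch of logging an nx3 point cloud built from a numpy array; the random points are illustrative only):
import numpy as np
import wandb

with wandb.init() as run:
    points = np.random.uniform(low=-1, high=1, size=(500, 3))  # one [x, y, z] point per row
    run.log({"point_cloud": wandb.Object3D(points)})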
from_point_cloud
View source
@classmethod
from_point_cloud(
points: Sequence['Point'],
boxes: Sequence['Box3D'],
vectors: Optional[Sequence['Vector3D']] = None,
point_cloud_type: "PointCloudType" = "lidar/beta"
) -> "Object3D"
Initializes Object3D from a python object.
Args |
|
points (Sequence["Point"]) : The points in the point cloud. boxes (Sequence["Box3D"]) : 3D bounding boxes for labeling the point cloud. Boxes are displayed in point cloud visualizations. vectors (Optional[Sequence["Vector3D"]]) : Each vector is displayed in the point cloud visualization. Can be used to indicate directionality of bounding boxes. Defaults to None . point_cloud_type ("lidar/beta") : At this time, only the "lidar/beta" type is supported. Defaults to "lidar/beta" . |
|
Class Variables |
|
SUPPORTED_POINT_CLOUD_TYPES |
|
SUPPORTED_TYPES |
|
3.4.10 - Plotly
Wandb class for plotly plots.
Plotly(
val: Union['plotly.Figure', 'matplotlib.artist.Artist']
)
Args |
|
val |
matplotlib or plotly figure |
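Example (a minimal sketch of logging a plotly figure; the figure data is illustrative only):
import plotly.graph_objects as go
import wandb

with wandb.init() as run:
    fig = go.Figure(data=go.Scatter(y=[0, 1, 4, 9]))
    run.log({"my_plot": wandb.Plotly(fig)})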
Methods
View source
@classmethod
make_plot_media(
val: Union['plotly.Figure', 'matplotlib.artist.Artist']
) -> Union[Image, 'Plotly']
3.4.11 - Table
The Table class used to display and analyze tabular data.
Table(
columns=None, data=None, rows=None, dataframe=None, dtype=None, optional=(True),
allow_mixed_types=(False)
)
Unlike traditional spreadsheets, Tables support numerous types of data:
scalar values, strings, numpy arrays, and most subclasses of wandb.data_types.Media
.
This means you can embed Images
, Video
, Audio
, and other sorts of rich, annotated media
directly in Tables, alongside other traditional scalar values.
This class is the primary class used to generate the Table Visualizer
in the UI: https://docs.wandb.ai/guides/data-vis/tables.
Args |
|
columns |
(List[str]) Names of the columns in the table. Defaults to ["Input", "Output", "Expected"] . |
data |
(List[List[any]]) 2D row-oriented array of values. |
dataframe |
(pandas.DataFrame) DataFrame object used to create the table. When set, data and columns arguments are ignored. |
optional |
(Union[bool, List[bool]]) Determines if None values are allowed. Defaults to True . If a single bool value, the optionality is enforced for all columns specified at construction time. If a list of bool values, the optionality is applied to each respective column and the list must be the same length as columns . |
allow_mixed_types |
(bool) Determines if columns are allowed to have mixed types (disables type validation). Defaults to False |
Methods
add_column
View source
add_column(
name, data, optional=(False)
)
Adds a column of data to the table.
Args |
|
name |
(str) - the unique name of the column |
data |
(list | np.array) - the column data to add |
optional |
(bool) - if null-like values are permitted |
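Example (a minimal sketch of appending a column to an existing table; the column names and values are illustrative only):
import wandb

table = wandb.Table(columns=["id"], data=[[0], [1], [2]])
table.add_column(name="label", data=["cat", "dog", "bird"])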
add_computed_columns
View source
add_computed_columns(
fn
)
Adds one or more computed columns based on existing data.
Args |
|
fn |
A function which accepts one or two parameters, ndx (int) and row (dict), which is expected to return a dict representing new columns for that row, keyed by the new column names. ndx is an integer representing the index of the row. Only included if include_ndx is set to True . row is a dictionary keyed by existing columns |
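Example (a minimal sketch that derives a new column from existing row values; column names are illustrative only):
import wandb

table = wandb.Table(columns=["a", "b"], data=[[1, 2], [3, 4]])
# fn receives the row index and a dict keyed by the existing columns,
# and returns a dict keyed by the new column names
table.add_computed_columns(lambda ndx, row: {"sum": row["a"] + row["b"]})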
add_data
View source
Adds a new row of data to the table. The maximum number of rows in a table is determined by wandb.Table.MAX_ARTIFACT_ROWS .
The length of the data should match the number of columns in the table.
add_row
View source
Deprecated. Use add_data
instead.
cast
View source
cast(
col_name, dtype, optional=(False)
)
Casts a column to a specific data type.
This can be one of the normal python classes, an internal W&B type, or an
example object, like an instance of wandb.Image or wandb.Classes .
Args |
|
col_name |
(str) - The name of the column to cast. |
dtype |
(class, wandb.wandb_sdk.interface._dtypes.Type, any) - The target dtype. |
optional |
(bool) - If the column should allow Nones. |
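Example (a minimal sketch of casting a column of integer class ids to a wandb.Classes type, as described above; the column name and class labels are illustrative only):
import wandb

class_set = wandb.Classes([{"name": "cat", "id": 0}, {"name": "dog", "id": 1}])
table = wandb.Table(columns=["class_id"], data=[[0], [1]])
table.cast("class_id", class_set)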
get_column
View source
get_column(
name, convert_to=None
)
Retrieves a column from the table and optionally converts it to a NumPy object.
Args |
|
name |
(str) - the name of the column |
convert_to |
(str, optional) - “numpy”: will convert the underlying data to numpy object |
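Example (a minimal sketch of reading a column back out of a table; the column name is illustrative only):
import wandb

table = wandb.Table(columns=["x"], data=[[1], [2], [3]])
xs = table.get_column("x")  # plain Python list: [1, 2, 3]
xs_np = table.get_column("x", convert_to="numpy")  # numpy array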
get_dataframe
View source
Returns a pandas.DataFrame
of the table.
get_index
View source
Returns an array of row indexes for use in other tables to create links.
index_ref
View source
Gets a reference of the index of a row in the table.
iterrows
View source
Returns the table data by row, showing the index of the row and the relevant data.
index : int
The index of the row. Using this value in other W&B tables
will automatically build a relationship between the tables
row : List[any]
The data of the row.
set_fk
View source
set_fk(
col_name, table, table_col
)
set_pk
View source
Class Variables |
|
MAX_ARTIFACT_ROWS |
200000 |
MAX_ROWS |
10000 |
3.4.12 - Video
Format a video for logging to W&B.
Video(
data_or_path: Union['np.ndarray', str, 'TextIO', 'BytesIO'],
caption: Optional[str] = None,
fps: Optional[int] = None,
format: Optional[str] = None
)
Args |
|
data_or_path |
(numpy array, string, io) Video can be initialized with a path to a file or an io object. The format must be gif , mp4 , webm or ogg , and must be specified with the format argument. Video can also be initialized with a numpy tensor. The numpy tensor must be either 4 dimensional or 5 dimensional, with dimensions (time, channel, height, width) or (batch, time, channel, height, width). |
caption |
(string) caption associated with the video for display |
fps |
(int) The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string, or bytes. |
format |
(string) format of video, necessary if initializing with path or io object. |
Examples:
Log a numpy array as a video
import numpy as np
import wandb
wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8)
wandb.log({"video": wandb.Video(frames, fps=4)})
Methods
encode
View source
encode(
fps: int = 4
) -> None
3.4.13 - WBTraceTree
Media object for trace tree data.
WBTraceTree(
root_span: Span,
model_dict: typing.Optional[dict] = None
)
Args |
|
root_span (Span ): The root span of the trace tree. |
|
model_dict (dict , optional): A dictionary containing the model dump. model_dict is a completely user-defined dict . The UI will render a JSON viewer for this dict , giving special treatment to dictionaries with a _kind key. This is because model vendors have such different serialization formats that we need to be flexible here. |
|
3.5 - finish
Finish a run and upload any remaining data.
finish(
exit_code: (int | None) = None,
quiet: (bool | None) = None
) -> None
Marks the completion of a W&B run and ensures all data is synced to the server.
The run’s final state is determined by its exit conditions and sync status.
Run States:
- Running: Active run that is logging data and/or sending heartbeats.
- Crashed: Run that stopped sending heartbeats unexpectedly.
- Finished: Run completed successfully (exit_code=0 ) with all data synced.
- Failed: Run completed with errors (exit_code!=0 ).
Args |
|
exit_code |
Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed. |
quiet |
Deprecated. Configure logging verbosity using wandb.Settings(quiet=...) . |
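Example (a minimal sketch of marking a run as failed when the training code raises; train() is a hypothetical stand-in for your own code):
import wandb

wandb.init()
try:
    train()  # hypothetical training function
    wandb.finish()  # run is marked Finished
except Exception:
    wandb.finish(exit_code=1)  # run is marked Failed
    raise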
3.6 - Import & Export API
Classes
class Api
: Used for querying the wandb server.
class File
: File is a class associated with a file saved by wandb.
class Files
: An iterable collection of File
objects.
class Job
class Project
: A project is a namespace for runs.
class Projects
: An iterable collection of Project
objects.
class QueuedRun
: A single queued run associated with an entity and project. Call run = queued_run.wait_until_running()
or run = queued_run.wait_until_finished()
to access the run.
class Run
: A single run associated with an entity and project.
class RunQueue
class Runs
: An iterable collection of runs associated with a project and optional filter.
class Sweep
: A set of runs associated with a sweep.
3.6.1 - Api
Used for querying the wandb server.
Api(
overrides: Optional[Dict[str, Any]] = None,
timeout: Optional[int] = None,
api_key: Optional[str] = None
) -> None
Examples:
Most common way to initialize
>>> wandb.Api()
Args |
|
overrides |
(dict) You can set base_url if you are using a wandb server other than https://api.wandb.ai. You can also set defaults for entity , project , and run . |
Methods
artifact
View source
artifact(
name: str,
type: Optional[str] = None
)
Return a single artifact by parsing path in the form project/name
or entity/project/name
.
Args |
|
name |
(str) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms: name:version name:alias |
type |
(str, optional) The type of artifact to fetch. |
Returns |
|
An Artifact object. |
|
Raises |
|
ValueError |
If the artifact name is not specified. |
ValueError |
If the artifact type is specified but does not match the type of the fetched artifact. |
Note:
This method is intended for external use only. Do not call api.artifact()
within the wandb repository code.
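Example (a minimal sketch of fetching and downloading an artifact; the artifact path and type below are placeholders):
import wandb

api = wandb.Api()
artifact = api.artifact("my_entity/my_project/my_dataset:latest", type="dataset")
local_dir = artifact.download()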
artifact_collection
View source
artifact_collection(
type_name: str,
name: str
) -> "public.ArtifactCollection"
Return a single artifact collection by type and parsing path in the form entity/project/name
.
Args |
|
type_name |
(str) The type of artifact collection to fetch. |
name |
(str) An artifact collection name. May be prefixed with entity/project. |
Returns |
|
An ArtifactCollection object. |
|
artifact_collection_exists
View source
artifact_collection_exists(
name: str,
type: str
) -> bool
Return whether an artifact collection exists within a specified project and entity.
Args |
|
name |
(str) An artifact collection name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to uncategorized . |
type |
(str) The type of artifact collection |
Returns |
|
True if the artifact collection exists, False otherwise. |
|
artifact_collections
View source
artifact_collections(
project_name: str,
type_name: str,
per_page: Optional[int] = 50
) -> "public.ArtifactCollections"
Return a collection of matching artifact collections.
Args |
|
project_name |
(str) The name of the project to filter on. |
type_name |
(str) The name of the artifact type to filter on. |
per_page |
(int, optional) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this. |
Returns |
|
An iterable ArtifactCollections object. |
|
artifact_exists
View source
artifact_exists(
name: str,
type: Optional[str] = None
) -> bool
Return whether an artifact version exists within a specified project and entity.
Args |
|
name |
(str) An artifact name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to uncategorized . Valid names can be in the following forms: name:version name:alias . |
type |
(str, optional) The type of artifact. |
Returns |
|
True if the artifact version exists, False otherwise. |
|
artifact_type
View source
artifact_type(
type_name: str,
project: Optional[str] = None
) -> "public.ArtifactType"
Return the matching ArtifactType
.
Args |
|
type_name |
(str) The name of the artifact type to retrieve. |
project |
(str, optional) If given, a project name or path to filter on. |
Returns |
|
An ArtifactType object. |
|
artifact_types
View source
artifact_types(
project: Optional[str] = None
) -> "public.ArtifactTypes"
Return a collection of matching artifact types.
Args |
|
project |
(str, optional) If given, a project name or path to filter on. |
Returns |
|
An iterable ArtifactTypes object. |
|
artifact_versions
View source
artifact_versions(
type_name, name, per_page=50
)
Deprecated, use artifacts(type_name, name)
instead.
artifacts
View source
artifacts(
type_name: str,
name: str,
per_page: Optional[int] = 50,
tags: Optional[List[str]] = None
) -> "public.Artifacts"
Return an Artifacts
collection from the given parameters.
Args |
|
type_name |
(str) The type of artifacts to fetch. |
name |
(str) An artifact collection name. May be prefixed with entity/project. |
per_page |
(int, optional) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this. |
tags |
(list[str], optional) Only return artifacts with all of these tags. |
Returns |
|
An iterable Artifacts object. |
|
create_project
View source
create_project(
name: str,
entity: str
) -> None
Create a new project.
Args |
|
name |
(str) The name of the new project. |
entity |
(str) The entity of the new project. |
create_run
View source
create_run(
*,
run_id: Optional[str] = None,
project: Optional[str] = None,
entity: Optional[str] = None
) -> "public.Run"
Create a new run.
Args |
|
run_id |
(str, optional) The ID to assign to the run, if given. The run ID is automatically generated by default, so in general, you do not need to specify this and should only do so at your own risk. |
project |
(str, optional) If given, the project of the new run. |
entity |
(str, optional) If given, the entity of the new run. |
Returns |
|
The newly created Run . |
|
create_run_queue
View source
create_run_queue(
name: str,
type: "public.RunQueueResourceType",
entity: Optional[str] = None,
prioritization_mode: Optional['public.RunQueuePrioritizationMode'] = None,
config: Optional[dict] = None,
template_variables: Optional[dict] = None
) -> "public.RunQueue"
Create a new run queue (launch).
Args |
|
name |
(str) Name of the queue to create. |
type |
(str) Type of resource to be used for the queue. One of local-container , local-process , kubernetes , sagemaker , or gcp-vertex . |
entity |
(str) Optional name of the entity to create the queue. If None, will use the configured or default entity. |
prioritization_mode |
(str) Optional version of prioritization to use. Either V0 or None . |
config |
(dict) Optional default resource configuration to be used for the queue. Use handlebars (eg. {{var}} ) to specify template variables. |
template_variables |
(dict) A dictionary of template variable schemas to be used with the config. Expected format of: { "var-name": { "schema": { "type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"] } } } . |
Returns |
|
The newly created RunQueue |
|
Raises |
|
ValueError if any of the parameters are invalid wandb.Error on wandb API errors |
|
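Example (a minimal sketch of creating a Kubernetes-backed queue; the queue name and entity are placeholders):
import wandb

api = wandb.Api()
queue = api.create_run_queue(
    name="my-k8s-queue",  # placeholder queue name
    type="kubernetes",
    entity="my-team",  # placeholder entity
)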
create_team
View source
create_team(
team, admin_username=None
)
Create a new team.
Args |
|
team |
(str) The name of the team |
admin_username |
(str) optional username of the admin user of the team, defaults to the current user. |
create_user
View source
create_user(
email, admin=(False)
)
Create a new user.
Args |
|
email |
(str) The email address of the user |
admin |
(bool) Whether this user should be a global instance admin |
flush
View source
Flush the local cache.
The api object keeps a local cache of runs, so if the state of the run changes
while your script is executing, you must clear the local cache with api.flush()
to get the latest values associated with the run.
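Example (a minimal sketch of refreshing a cached run; the run path is a placeholder):
import wandb

api = wandb.Api()
run = api.run("my_entity/my_project/run_id")  # placeholder path
# ... the run may keep logging data elsewhere ...
api.flush()  # drop the cached state
run = api.run("my_entity/my_project/run_id")  # re-fetch the latest values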
from_path
View source
Return a run, sweep, project, or report from a path.
Examples:
project = api.from_path("my_project")
team_project = api.from_path("my_team/my_project")
run = api.from_path("my_team/my_project/runs/id")
sweep = api.from_path("my_team/my_project/sweeps/id")
report = api.from_path("my_team/my_project/reports/My-Report-Vm11dsdf")
Args |
|
path |
(str) The path to the project, run, sweep or report |
Returns |
|
A Project , Run , Sweep , or BetaReport instance. |
|
Raises |
|
wandb.Error if path is invalid or the object doesn’t exist |
|
job
View source
job(
name: Optional[str],
path: Optional[str] = None
) -> "public.Job"
Return a Job
from the given parameters.
Args |
|
name |
(str) The job name. |
path |
(str, optional) If given, the root path in which to download the job artifact. |
list_jobs
View source
list_jobs(
entity: str,
project: str
) -> List[Dict[str, Any]]
Return a list of jobs, if any, for the given entity and project.
Args |
|
entity |
(str) The entity for the listed jobs. |
project |
(str) The project for the listed jobs. |
Returns |
|
A list of matching jobs. |
|
project
View source
project(
name: str,
entity: Optional[str] = None
) -> "public.Project"
Return the Project
with the given name (and entity, if given).
Args |
|
name |
(str) The project name. |
entity |
(str) Name of the entity requested. If None, will fall back to the default entity passed to Api . If no default entity, will raise a ValueError . |
Returns |
|
A Project object. |
|
projects
View source
projects(
entity: Optional[str] = None,
per_page: Optional[int] = 200
) -> "public.Projects"
Get projects for a given entity.
Args |
|
entity |
(str) Name of the entity requested. If None, will fall back to the default entity passed to Api . If no default entity, will raise a ValueError . |
per_page |
(int) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this. |
Returns |
|
A Projects object which is an iterable collection of Project objects. |
|
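Example (a minimal sketch of listing project names for an entity; the entity is a placeholder):
import wandb

api = wandb.Api()
for project in api.projects(entity="my_entity"):  # placeholder entity
    print(project.name)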
queued_run
View source
queued_run(
entity, project, queue_name, run_queue_item_id, project_queue=None,
priority=None
)
Return a single queued run based on the path.
Parses paths of the form entity/project/queue_id/run_queue_item_id.
reports
View source
reports(
path: str = "",
name: Optional[str] = None,
per_page: Optional[int] = 50
) -> "public.Reports"
Get reports for a given project path.
WARNING: This api is in beta and will likely change in a future release
Args |
|
path |
(str) path to project the report resides in, should be in the form: “entity/project” |
name |
(str, optional) optional name of the report requested. |
per_page |
(int) Sets the page size for query pagination. None will use the default size. Usually there is no reason to change this. |
Returns |
|
A Reports object which is an iterable collection of BetaReport objects. |
|
run
View source
Return a single run by parsing path in the form entity/project/run_id.
Args |
|
path |
(str) path to run in the form entity/project/run_id . If api.entity is set, this can be in the form project/run_id and if api.project is set this can just be the run_id. |
run_queue
View source
run_queue(
entity, name
)
Return the named RunQueue
for entity.
To create a new RunQueue
, use wandb.Api().create_run_queue(...)
.
runs
View source
runs(
path: Optional[str] = None,
filters: Optional[Dict[str, Any]] = None,
order: str = "+created_at",
per_page: int = 50,
include_sweeps: bool = (True)
)
Return a set of runs from a project that match the filters provided.
You can filter by config.*
, summary_metrics.*
, tags
, state
, entity
, createdAt
, etc.
Examples:
Find runs in my_project where config.experiment_name has been set to “foo”
api.runs(path="my_entity/my_project", filters={"config.experiment_name": "foo"})
Find runs in my_project where config.experiment_name has been set to “foo” or “bar”
api.runs(
path="my_entity/my_project",
filters={
"$or": [
{"config.experiment_name": "foo"},
{"config.experiment_name": "bar"},
]
},
)
Find runs in my_project where config.experiment_name matches a regex (anchors are not supported)
api.runs(
path="my_entity/my_project",
filters={"config.experiment_name": {"$regex": "b.*"}},
)
Find runs in my_project where the run name matches a regex (anchors are not supported)
api.runs(
path="my_entity/my_project", filters={"display_name": {"$regex": "^foo.*"}}
)
Find runs in my_project sorted by ascending loss
api.runs(path="my_entity/my_project", order="+summary_metrics.loss")
Args |
|
path |
(str) path to project, should be in the form: “entity/project” |
filters |
(dict) Queries for specific runs using the MongoDB query language. You can filter by run properties such as config.key, summary_metrics.key, state, entity, createdAt, etc. For example, {"config.experiment_name": "foo"} would find runs with a config entry of experiment_name set to "foo". You can compose operators to build more complicated queries; see the reference for the query language at https://docs.mongodb.com/manual/reference/operator/query |
order |
(str) Order can be created_at , heartbeat_at , config.*.value , or summary_metrics.* . If you prepend order with a + order is ascending. If you prepend order with a - order is descending (default). The default order is run.created_at from oldest to newest. |
per_page |
(int) Sets the page size for query pagination. |
include_sweeps |
(bool) Whether to include the sweep runs in the results. |
Returns |
|
A Runs object, which is an iterable collection of Run objects. |
|
sweep
View source
Return a sweep by parsing path in the form entity/project/sweep_id
.
Args |
|
path |
(str, optional) path to sweep in the form entity/project/sweep_id. If api.entity is set, this can be in the form project/sweep_id and if api.project is set this can just be the sweep_id. |
sync_tensorboard
View source
sync_tensorboard(
root_dir, run_id=None, project=None, entity=None
)
Sync a local directory containing tfevent files to wandb.
team
View source
team(
team: str
) -> "public.Team"
Return the matching Team
with the given name.
Args |
|
team |
(str) The name of the team. |
upsert_run_queue
View source
upsert_run_queue(
name: str,
resource_config: dict,
resource_type: "public.RunQueueResourceType",
entity: Optional[str] = None,
template_variables: Optional[dict] = None,
external_links: Optional[dict] = None,
prioritization_mode: Optional['public.RunQueuePrioritizationMode'] = None
)
Upsert a run queue (launch).
Args |
|
name |
(str) Name of the queue to create. |
entity |
(str) Optional name of the entity to create the queue. If None, will use the configured or default entity. |
resource_config |
(dict) Optional default resource configuration to be used for the queue. Use handlebars (eg. {{var}} ) to specify template variables. |
resource_type |
(str) Type of resource to be used for the queue. One of local-container , local-process , kubernetes , sagemaker , or gcp-vertex . |
template_variables |
(dict) A dictionary of template variable schemas to be used with the config. Expected format of: { "var-name": { "schema": { "type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"] } } } . |
external_links |
(dict) Optional dictionary of external links to be used with the queue. Expected format of: { "name": "url" } . |
prioritization_mode |
(str) Optional version of prioritization to use. Either V0 or None . |
Returns |
|
The upserted RunQueue . |
|
Raises |
|
ValueError if any of the parameters are invalid wandb.Error on wandb API errors |
|
user
View source
user(
username_or_email: str
) -> Optional['public.User']
Return a user from a username or email address.
Note: This function only works for Local Admins. If you are trying to get your own user object, please use api.viewer
.
Args |
|
username_or_email |
(str) The username or email address of the user |
Returns |
|
A User object or None if a user couldn’t be found |
|
users
View source
users(
username_or_email: str
) -> List['public.User']
Return all users from a partial username or email address query.
Note: This function only works for Local Admins. If you are trying to get your own user object, please use api.viewer
.
Args |
|
username_or_email |
(str) The prefix or suffix of the user you want to find |
Returns |
|
An array of User objects |
|
Class Variables |
|
CREATE_PROJECT |
|
DEFAULT_ENTITY_QUERY |
|
USERS_QUERY |
|
VIEWER_QUERY |
|
3.6.2 - File
File is a class associated with a file saved by wandb.
File(
client, attrs, run=None
)
Attributes |
|
path_uri |
Returns the uri path to the file in the storage bucket. |
Methods
delete
View source
display
View source
display(
height=420, hidden=(False)
) -> bool
Display this object in jupyter.
download
View source
download(
root: str = ".",
replace: bool = (False),
exist_ok: bool = (False),
api: Optional[Api] = None
) -> io.TextIOWrapper
Downloads a file previously saved by a run from the wandb server.
Args |
|
replace (boolean): If True , download will overwrite a local file if it exists. Defaults to False . root (str): Local directory to save the file. Defaults to . . |
|
exist_ok (boolean): If True , will not raise ValueError if file already exists and will not re-download unless replace=True. Defaults to False . api (Api, optional): If given, the Api instance used to download the file. |
|
Raises |
|
ValueError if file already exists, replace=False and exist_ok=False. |
|
snake_to_camel
View source
to_html
View source
to_html(
*args, **kwargs
)
3.6.3 - Files
An iterable collection of File
objects.
Files(
client, run, names=None, per_page=50, upload=(False)
)
Methods
convert_objects
View source
next
View source
update_variables
View source
__getitem__
View source
__iter__
View source
__len__
View source
3.6.4 - Job
Job(
api: "Api",
name,
path: Optional[str] = None
) -> None
Methods
call
View source
call(
config, project=None, entity=None, queue=None, resource="local-container",
resource_args=None, template_variables=None, project_queue=None, priority=None
)
set_entrypoint
View source
set_entrypoint(
entrypoint: List[str]
)
3.6.5 - Project
A project is a namespace for runs.
Project(
client, entity, project, attrs
)
Methods
artifacts_types
View source
artifacts_types(
per_page=50
)
display
View source
display(
height=420, hidden=(False)
) -> bool
Display this object in jupyter.
snake_to_camel
View source
sweeps
View source
to_html
View source
to_html(
height=420, hidden=(False)
)
Generate HTML containing an iframe displaying this project.
3.6.6 - Projects
An iterable collection of Project
objects.
Projects(
client, entity, per_page=50
)
Methods
convert_objects
View source
next
View source
update_variables
View source
__getitem__
View source
__iter__
View source
__len__
View source
3.6.7 - QueuedRun
A single queued run associated with an entity and project. Call run = queued_run.wait_until_running()
or run = queued_run.wait_until_finished()
to access the run.
QueuedRun(
client, entity, project, queue_name, run_queue_item_id,
project_queue=LAUNCH_DEFAULT_PROJECT, priority=None
)
Methods
delete
View source
delete(
delete_artifacts=(False)
)
Delete the given queued run from the wandb backend.
wait_until_finished
View source
wait_until_running
View source
3.6.8 - Run
A single run associated with an entity and project.
Run(
client: "RetryingClient",
entity: str,
project: str,
run_id: str,
attrs: Optional[Mapping] = None,
include_sweeps: bool = (True)
)
Methods
create
View source
@classmethod
create(
api, run_id=None, project=None, entity=None
)
Create a run for the given project.
delete
View source
delete(
delete_artifacts=(False)
)
Delete the given run from the wandb backend.
display
View source
display(
height=420, hidden=(False)
) -> bool
Display this object in jupyter.
file
View source
Return the File with a given name saved by the run.
Args |
|
name (str): name of requested file. |
|
Returns |
|
A File matching the name argument. |
|
files
View source
files(
names=None, per_page=50
)
Return a file path for each file named.
Args |
|
names (list): names of the requested files, if empty returns all files per_page (int): number of results per page. |
|
Returns |
|
A Files object, which is an iterator over File objects. |
|
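Example (a minimal sketch of listing the files saved by a run; the run path is a placeholder):
import wandb

api = wandb.Api()
run = api.run("my_entity/my_project/run_id")  # placeholder path
for f in run.files():
    print(f.name)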
history
View source
history(
samples=500, keys=None, x_axis="_step", pandas=(True), stream="default"
)
Return sampled history metrics for a run.
This is simpler and faster if you are ok with the history records being sampled.
Args |
|
samples |
(int, optional) The number of samples to return |
pandas |
(bool, optional) Return a pandas dataframe |
keys |
(list, optional) Only return metrics for specific keys |
x_axis |
(str, optional) Use this metric as the xAxis defaults to _step |
stream |
(str, optional) “default” for metrics, “system” for machine metrics |
Returns |
|
pandas.DataFrame |
If pandas=True returns a pandas.DataFrame of history metrics. list of dicts: If pandas=False returns a list of dicts of history metrics. |
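Example (a minimal sketch of pulling sampled metrics into a pandas DataFrame; the run path and metric key are placeholders):
import wandb

api = wandb.Api()
run = api.run("my_entity/my_project/run_id")  # placeholder path
history_df = run.history(samples=100, keys=["loss"], pandas=True)  # "loss" is a placeholder key
print(history_df.head())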
load
View source
log_artifact
View source
log_artifact(
artifact: "wandb.Artifact",
aliases: Optional[Collection[str]] = None,
tags: Optional[Collection[str]] = None
)
Declare an artifact as output of a run.
Args |
|
artifact (Artifact ): An artifact returned from wandb.Api().artifact(name) . aliases (list, optional): Aliases to apply to this artifact. |
|
tags |
(list, optional) Tags to apply to this artifact, if any. |
Returns |
|
An Artifact object. |
|
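Example (a minimal sketch of attaching an existing artifact as an output of a finished run; the paths and alias are placeholders):
import wandb

api = wandb.Api()
run = api.run("my_entity/my_project/run_id")  # placeholder path
artifact = api.artifact("my_entity/my_project/my_dataset:latest")  # placeholder artifact
run.log_artifact(artifact, aliases=["evaluated"])  # placeholder alias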
logged_artifacts
View source
logged_artifacts(
per_page: int = 100
) -> public.RunArtifacts
Fetches all artifacts logged by this run.
Retrieves all output artifacts that were logged during the run. Returns a
paginated result that can be iterated over or collected into a single list.
Args |
|
per_page |
Number of artifacts to fetch per API request. |
Returns |
|
An iterable collection of all Artifact objects logged as outputs during this run. |
|
Example:
>>> import wandb
>>> import tempfile
>>> with tempfile.NamedTemporaryFile(
... mode="w", delete=False, suffix=".txt"
... ) as tmp:
... tmp.write("This is a test artifact")
... tmp_path = tmp.name
>>> run = wandb.init(project="artifact-example")
>>> artifact = wandb.Artifact("test_artifact", type="dataset")
>>> artifact.add_file(tmp_path)
>>> run.log_artifact(artifact)
>>> run.finish()
>>> api = wandb.Api()
>>> finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
>>> for logged_artifact in finished_run.logged_artifacts():
... print(logged_artifact.name)
test_artifact
save
View source
scan_history
View source
scan_history(
keys=None, page_size=1000, min_step=None, max_step=None
)
Returns an iterable collection of all history records for a run.
Example:
Export all the loss values for an example run
run = api.run("l2k2/examples-numpy-boston/i0wt6xua")
history = run.scan_history(keys=["Loss"])
losses = [row["Loss"] for row in history]
Args |
|
keys ([str], optional): only fetch these keys, and only fetch rows that have all of keys defined. page_size (int, optional): size of pages to fetch from the api. min_step (int, optional): the minimum step to start scanning from. max_step (int, optional): the maximum step to scan up to. |
|
Returns |
|
An iterable collection over history records (dict). |
|
snake_to_camel
View source
to_html
View source
to_html(
height=420, hidden=(False)
)
Generate HTML containing an iframe displaying this run.
update
View source
Persist changes to the run object to the wandb backend.
upload_file
View source
upload_file(
path, root="."
)
Upload a file.
Args |
|
path (str): name of file to upload. root (str): the root path to save the file relative to. For example, to have the file saved in the run as my_dir/file.txt when uploading from within my_dir , set root to ../ . |
|
Returns |
|
A File matching the name argument. |
|
use_artifact
View source
use_artifact(
artifact, use_as=None
)
Declare an artifact as an input to a run.
Args |
|
artifact (Artifact ): An artifact returned from wandb.Api().artifact(name) use_as (string, optional): A string identifying how the artifact is used in the script. Used to easily differentiate artifacts used in a run, when using the beta wandb launch feature’s artifact swapping functionality. |
|
Returns |
|
An Artifact object. |
|
used_artifacts
View source
used_artifacts(
per_page: int = 100
) -> public.RunArtifacts
Fetches artifacts explicitly used by this run.
Retrieves only the input artifacts that were explicitly declared as used
during the run, typically via run.use_artifact()
. Returns a paginated
result that can be iterated over or collected into a single list.
Args |
|
per_page |
Number of artifacts to fetch per API request. |
Returns |
|
An iterable collection of Artifact objects explicitly used as inputs in this run. |
|
Example:
>>> import wandb
>>> run = wandb.init(project="artifact-example")
>>> run.use_artifact("test_artifact:latest")
>>> run.finish()
>>> api = wandb.Api()
>>> finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
>>> for used_artifact in finished_run.used_artifacts():
... print(used_artifact.name)
test_artifact
wait_until_finished
View source
3.6.9 - RunQueue
RunQueue(
client: "RetryingClient",
name: str,
entity: str,
prioritization_mode: Optional[RunQueuePrioritizationMode] = None,
_access: Optional[RunQueueAccessType] = None,
_default_resource_config_id: Optional[int] = None,
_default_resource_config: Optional[dict] = None
) -> None
Attributes |
|
items |
Up to the first 100 queued runs. Modifying this list will not modify the queue or any enqueued items. |
Methods
create
View source
@classmethod
create(
name: str,
resource: "RunQueueResourceType",
entity: Optional[str] = None,
prioritization_mode: Optional['RunQueuePrioritizationMode'] = None,
config: Optional[dict] = None,
template_variables: Optional[dict] = None
) -> "RunQueue"
delete
View source
Delete the run queue from the wandb backend.
3.6.10 - Runs
An iterable collection of runs associated with a project and optional filter.
Runs(
client: "RetryingClient",
entity: str,
project: str,
filters: Optional[Dict[str, Any]] = None,
order: Optional[str] = None,
per_page: int = 50,
include_sweeps: bool = (True)
)
This is generally used indirectly via the Api
.runs method.
Methods
convert_objects
View source
histories
View source
histories(
samples: int = 500,
keys: Optional[List[str]] = None,
x_axis: str = "_step",
format: Literal['default', 'pandas', 'polars'] = "default",
stream: Literal['default', 'system'] = "default"
)
Return sampled history metrics for all runs that fit the filters conditions.
Args |
|
samples |
(int, optional) The number of samples to return per run |
keys |
(list[str], optional) Only return metrics for specific keys |
x_axis |
(str, optional) Use this metric as the xAxis defaults to _step |
format |
(Literal, optional) Format to return data in, options are default , pandas , polars |
stream |
(Literal, optional) default for metrics, system for machine metrics |
Returns |
|
pandas.DataFrame |
If format="pandas" , returns a pandas.DataFrame of history metrics. |
polars.DataFrame |
If format="polars" returns a polars.DataFrame of history metrics. list of dicts: If format="default" , returns a list of dicts containing history metrics with a run_id key. |
next
View source
update_variables
View source
__getitem__
View source
__iter__
View source
__len__
View source
3.6.11 - Sweep
A set of runs associated with a sweep.
Sweep(
client, entity, project, sweep_id, attrs=None
)
Examples:
Instantiate with:
api = wandb.Api()
sweep = api.sweep("entity/project/sweep_id")
Attributes |
|
runs |
(Runs ) list of runs |
id |
(str) sweep id |
project |
(str) name of project |
config |
(dict) dictionary of sweep configuration |
state |
(str) the state of the sweep |
expected_run_count |
(int) number of expected runs for the sweep |
Methods
best_run
View source
Return the best run sorted by the metric defined in config or the order passed in.
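Example (a minimal sketch of retrieving the best run of a sweep; the sweep path is a placeholder):
import wandb

api = wandb.Api()
sweep = api.sweep("my_entity/my_project/sweep_id")  # placeholder path
best = sweep.best_run()
print(best.name)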
display
View source
display(
height=420, hidden=(False)
) -> bool
Display this object in jupyter.
get
View source
@classmethod
get(
client, entity=None, project=None, sid=None, order=None, query=None, **kwargs
)
Execute a query against the cloud backend.
load
View source
load(
force: bool = (False)
)
snake_to_camel
View source
to_html
View source
to_html(
height=420, hidden=(False)
)
Generate HTML containing an iframe displaying this sweep.
Class Variables |
|
LEGACY_QUERY |
|
QUERY |
|
3.7 - init
Start a new run to track and log to W&B.
init(
entity: (str | None) = None,
project: (str | None) = None,
dir: (StrPath | None) = None,
id: (str | None) = None,
name: (str | None) = None,
notes: (str | None) = None,
tags: (Sequence[str] | None) = None,
config: (dict[str, Any] | str | None) = None,
config_exclude_keys: (list[str] | None) = None,
config_include_keys: (list[str] | None) = None,
allow_val_change: (bool | None) = None,
group: (str | None) = None,
job_type: (str | None) = None,
mode: (Literal['online', 'offline', 'disabled'] | None) = None,
force: (bool | None) = None,
anonymous: (Literal['never', 'allow', 'must'] | None) = None,
reinit: (bool | None) = None,
resume: (bool | Literal['allow', 'never', 'must', 'auto'] | None) = None,
resume_from: (str | None) = None,
fork_from: (str | None) = None,
save_code: (bool | None) = None,
tensorboard: (bool | None) = None,
sync_tensorboard: (bool | None) = None,
monitor_gym: (bool | None) = None,
settings: (Settings | dict[str, Any] | None) = None
) -> Run
In an ML training pipeline, you could add wandb.init()
to the beginning of
your training script as well as your evaluation script, and each piece would
be tracked as a run in W&B.
wandb.init()
spawns a new background process to log data to a run, and it
also syncs data to https://wandb.ai by default, so you can see your results
in real-time.
Call wandb.init()
to start a run before logging data with wandb.log()
.
When you’re done logging data, call wandb.finish()
to end the run. If you
don’t call wandb.finish()
, the run ends when your script exits.
For more on using wandb.init()
, including detailed examples, check out our
guide and FAQs.
Examples:
Explicitly set the entity and project and choose a name for the run:
import wandb
run = wandb.init(
entity="geoff",
project="capsules",
name="experiment-2021-10-31",
)
# ... your training code here ...
run.finish()
import wandb
config = {"lr": 0.01, "batch_size": 32}
with wandb.init(config=config) as run:
run.config.update({"architecture": "resnet", "depth": 34})
# ... your training code here ...
Note that you can use wandb.init()
as a context manager to automatically
call wandb.finish()
at the end of the block.
Args |
|
entity |
The username or team name under which the runs will be logged. The entity must already exist, so ensure you’ve created your account or team in the UI before starting to log runs. If not specified, the run will default to your default entity. To change the default entity, go to your settings and update the Default location to create new projects under Default team. |
project |
The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can’t infer the project name, the project will default to "uncategorized" . |
dir |
An absolute path to the directory where metadata and downloaded files will be stored. When calling download() on an artifact, files will be saved to this directory. If not specified, this defaults to the ./wandb directory. |
id |
A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. The identifier must not contain any of the following special characters: / \ # ? % : . For a short descriptive name, use the name field, or for saving hyperparameters to compare across runs, use config . |
name |
A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name allowing easy cross-reference runs from table to charts. Keeping these run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the config field. |
notes |
A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future. |
tags |
A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like “baseline” or “production.” You can easily add, remove tags, or filter by tags in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, use run.tags += ["new_tag"] after calling run = wandb.init() . |
config |
Sets wandb.config , a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI in an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (. ), and values should be smaller than 10 MB. If a dictionary, argparse.Namespace , or absl.flags.FLAGS is provided, the key-value pairs will be loaded directly into wandb.config . If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded into wandb.config . |
config_exclude_keys |
A list of specific keys to exclude from wandb.config . |
config_include_keys |
A list of specific keys to include in wandb.config . |
allow_val_change |
Controls whether config values can be modified after their initial set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider using wandb.log() instead. By default, this is False in scripts and True in Notebook environments. |
group |
Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment. For more information, refer to the guide to grouping runs. |
job_type |
Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as “train” and “eval”. Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons. |
mode |
Specifies how run data is managed, with the following options: - "online" (default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations. - "offline" : Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing. - "disabled" : Disables all W&B functionality, making the run’s methods no-ops. Typically used in testing to bypass W&B operations. |
force |
Determines if a W&B login is required to run the script. If True , the user must be logged in to W&B; otherwise, the script will not proceed. If False (default), the script can proceed without a login, switching to offline mode if the user is not logged in. |
anonymous |
Specifies the level of control over anonymous data logging. Available options are: - "never" (default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account. - "allow" : Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI. - "must" : Forces the run to be logged to an anonymous account, even if the user is logged in. |
reinit |
Determines if multiple wandb.init() calls can start new runs within the same process. By default (False ), if an active run exists, calling wandb.init() returns the existing run instead of creating a new one. When reinit=True , the active run is finished before a new run is initialized. In notebook environments, runs are reinitialized by default unless reinit is explicitly set to False . |
resume |
Controls the behavior when resuming a run with the specified id . Available options are: - "allow" : If a run with the specified id exists, it will resume from the last step; otherwise, a new run will be created. - "never" : If a run with the specified id exists, an error will be raised. If no such run is found, a new run will be created. - "must" : If a run with the specified id exists, it will resume from the last step. If no run is found, an error will be raised. - "auto" : Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run. - True : Deprecated. Use "auto" instead. - False : Deprecated. Use the default behavior (leaving resume unset) to always start a new run. Note: If resume is set, fork_from and resume_from cannot be used. When resume is unset, the system will always start a new run. For more details, see the guide to resuming runs. |
resume_from |
Specifies a moment in a previous run to resume a run from, using the format {run_id}?_step={step} . This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If an id argument is also provided, the resume_from argument will take precedence. resume , resume_from and fork_from cannot be used together, only one of them can be used at a time. Note: This feature is in beta and may change in the future. |
fork_from |
Specifies a point in a previous run from which to fork a new run, using the format {id}?_step={step} . This creates a new run that resumes logging from the specified step in the target run’s history. The target run must be part of the current project. If an id argument is also provided, it must be different from the fork_from argument, an error will be raised if they are the same. resume , resume_from and fork_from cannot be used together, only one of them can be used at a time. Note: This feature is in beta and may change in the future. |
save_code |
Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enable on your settings page. |
tensorboard |
Deprecated. Use sync_tensorboard instead. |
sync_tensorboard |
Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default: False ) |
monitor_gym |
Enables automatic logging of videos of the environment when using OpenAI Gym. For additional details, see the guide for gym integration. |
settings |
Specifies a dictionary or wandb.Settings object with advanced settings for the run. |
Returns |
|
A Run object, which is a handle to the current run. Use this object to perform operations like logging data, saving files, and finishing the run. See the Run API for more details. |
|
Raises |
|
Error |
If some unknown or internal error happened during the run initialization. |
AuthenticationError |
If the user failed to provide valid credentials. |
CommError |
If there was a problem communicating with the W&B server. |
UsageError |
If the user provided invalid arguments to the function. |
KeyboardInterrupt |
If the user interrupts the run initialization process. |
3.8 - Integrations
Modules
keras
module: Tools for integrating wandb
with Keras
.
3.8.1 - Integrations
Modules
keras
module: Tools for integrating wandb
with Keras
.
3.8.2 - Keras
Tools for integrating wandb
with Keras
.
Classes
class WandbCallback
: WandbCallback
automatically integrates keras with wandb.
class WandbEvalCallback
: Abstract base class to build Keras callbacks for model prediction visualization.
class WandbMetricsLogger
: Logger that sends system metrics to W&B.
class WandbModelCheckpoint
: A checkpoint that periodically saves a Keras model or model weights.
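Example (a minimal sketch of wiring WandbMetricsLogger into model.fit() with a toy Keras model; the project name and toy data are illustrative only, and WandbModelCheckpoint or the other callbacks can be added to the callbacks list in the same way):
import numpy as np
import tensorflow as tf
import wandb
from wandb.integration.keras import WandbMetricsLogger

run = wandb.init(project="keras-demo")  # placeholder project name

# Toy data and model purely for illustration.
x_train = np.random.rand(128, 10)
y_train = np.random.randint(0, 2, size=(128,))
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(
    x_train,
    y_train,
    epochs=2,
    callbacks=[WandbMetricsLogger(log_freq="epoch")],
)
run.finish()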
3.8.2.1 - keras
Tools for integrating wandb
with Keras
.
Classes
class WandbCallback
: WandbCallback
automatically integrates keras with wandb.
class WandbEvalCallback
: Abstract base class to build Keras callbacks for model prediction visualization.
class WandbMetricsLogger
: Logger that sends system metrics to W&B.
class WandbModelCheckpoint
: A checkpoint that periodically saves a Keras model or model weights.
3.8.2.2 - WandbCallback
WandbCallback
automatically integrates keras with wandb.
WandbCallback(
monitor="val_loss", verbose=0, mode="auto", save_weights_only=(False),
log_weights=(False), log_gradients=(False), save_model=(True),
training_data=None, validation_data=None, labels=None, predictions=36,
generator=None, input_type=None, output_type=None, log_evaluation=(False),
validation_steps=None, class_colors=None, log_batch_frequency=None,
log_best_prefix="best_", save_graph=(True), validation_indexes=None,
validation_row_processor=None, prediction_row_processor=None,
infer_missing_processors=(True), log_evaluation_frequency=0,
compute_flops=(False), **kwargs
)
Example:
model.fit(
X_train,
y_train,
validation_data=(X_test, y_test),
callbacks=[WandbCallback()],
)
WandbCallback
will automatically log history data from any
metrics collected by keras: loss and anything passed into keras_model.compile()
.
WandbCallback
will set summary metrics for the run associated with the “best” training
step, where “best” is defined by the monitor
and mode
attributes. This defaults
to the epoch with the minimum val_loss
. WandbCallback
will by default save the model
associated with the best epoch
.
WandbCallback
can optionally log gradient and parameter histograms.
WandbCallback
can optionally save training and validation data for wandb to visualize.
Args |
|
monitor |
(str) name of metric to monitor. Defaults to val_loss . |
mode |
(str) one of {auto , min , max }. min - save model when monitor is minimized max - save model when monitor is maximized auto - try to guess when to save the model (default). |
save_model |
True - save a model when monitor beats all previous epochs False - don’t save models |
save_graph |
(boolean) if True save model graph to wandb (default to True). |
save_weights_only |
(boolean) if True, then only the model’s weights will be saved (model.save_weights(filepath) ), else the full model is saved (model.save(filepath) ). |
log_weights |
(boolean) if True save histograms of the model’s layer’s weights. |
log_gradients |
(boolean) if True log histograms of the training gradients |
training_data |
(tuple) Same format (X,y) as passed to model.fit . This is needed for calculating gradients - this is mandatory if log_gradients is True . |
validation_data |
(tuple) Same format (X,y) as passed to model.fit . A set of data for wandb to visualize. If this is set, every epoch, wandb will make a small number of predictions and save the results for later visualization. In case you are working with image data, please also set input_type and output_type in order to log correctly. |
generator |
(generator) a generator that returns validation data for wandb to visualize. This generator should return tuples (X,y) . Either validation_data or generator should be set for wandb to visualize specific data examples. In case you are working with image data, please also set input_type and output_type in order to log correctly. |
validation_steps |
(int) if validation_data is a generator, how many steps to run the generator for the full validation set. |
labels |
(list) If you are visualizing your data with wandb, this list of labels will convert numeric output to understandable strings if you are building a multiclass classifier. If you are making a binary classifier you can pass in a list of two labels [label for false , label for true ]. If validation_data and generator are both unset, this won’t do anything. |
predictions |
(int) the number of predictions to make for visualization each epoch, max is 100. |
input_type |
(string) type of the model input to help visualization. can be one of: (image , images , segmentation_mask , auto ). |
output_type |
(string) type of the model output to help visualization. can be one of: (image , images , segmentation_mask , label ). |
log_evaluation |
(boolean) if True, save a Table containing validation data and the model’s predictions at each epoch. See validation_indexes , validation_row_processor , and output_row_processor for additional details. |
class_colors |
([float, float, float]) if the input or output is a segmentation mask, an array containing an rgb tuple (range 0-1) for each class. |
log_batch_frequency |
(integer) if None, callback will log every epoch. If set to integer, callback will log training metrics every log_batch_frequency batches. |
log_best_prefix |
(string) if None, no extra summary metrics will be saved. If set to a string, the monitored metric and epoch will be prepended with this value and stored as summary metrics. |
validation_indexes |
([wandb.data_types._TableLinkMixin]) an ordered list of index keys to associate with each validation example. If log_evaluation is True and validation_indexes is provided, then a Table of validation data will not be created and instead each prediction will be associated with the row represented by the TableLinkMixin . The most common way to obtain such keys is to use Table.get_index() , which will return a list of row keys. |
validation_row_processor |
(Callable) a function to apply to the validation data, commonly used to visualize the data. The function will receive an ndx (int) and a row (dict). If your model has a single input, then row["input"] will be the input data for the row. Else, it will be keyed based on the name of the input slot. If your fit function takes a single target, then row["target"] will be the target data for the row. Else, it will be keyed based on the name of the output slots. For example, if your input data is a single ndarray, but you wish to visualize the data as an Image, then you can provide lambda ndx, row: {"img": wandb.Image(row["input"])} as the processor. Ignored if log_evaluation is False or validation_indexes are present. |
output_row_processor |
(Callable) same as validation_row_processor , but applied to the model’s output. row["output"] will contain the results of the model output. |
infer_missing_processors |
(bool) Determines if validation_row_processor and output_row_processor should be inferred if missing. Defaults to True. If labels are provided, we will attempt to infer classification-type processors where appropriate. |
log_evaluation_frequency |
(int) Determines the frequency which evaluation results will be logged. Default 0 (only at the end of training). Set to 1 to log every epoch, 2 to log every other epoch, and so on. Has no effect when log_evaluation is False. |
compute_flops |
(bool) Compute the FLOPs of your Keras Sequential or Functional model, reported in GigaFLOPs (GFLOPs). |
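Example: a minimal sketch of passing the callback to model.fit() for an image classifier. The project name, data, model, and label names are placeholders, and the import path may vary across wandb versions:
import wandb
from wandb.integration.keras import WandbCallback

run = wandb.init(project="my-project")  # placeholder project name

# x_train, y_train, x_val, y_val and `model` are assumed to already exist.
model.fit(
    x_train,
    y_train,
    validation_data=(x_val, y_val),
    epochs=5,
    callbacks=[
        WandbCallback(
            monitor="val_loss",
            input_type="image",
            labels=["cat", "dog"],  # placeholder class names
            log_evaluation=True,
        )
    ],
)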
Methods
get_flops
View source
Calculate FLOPS [GFLOPs] for a tf.keras.Model
or tf.keras.Sequential
model in inference mode.
It uses tf.compat.v1.profiler
under the hood.
set_model
View source
set_params
View source
3.8.2.3 - WandbEvalCallback
Abstract base class to build Keras callbacks for model prediction visualization.
WandbEvalCallback(
data_table_columns: List[str],
pred_table_columns: List[str],
*args,
**kwargs
) -> None
You can build callbacks for visualizing model predictions on_epoch_end
that can be passed to model.fit()
for classification, object detection,
segmentation, etc. tasks.
To use this, inherit from this base callback class and implement the
add_ground_truth
and add_model_prediction
methods.
The base class will take care of the following:
- Initialize
data_table
for logging the ground truth and
pred_table
for predictions.
- The data uploaded to
data_table
is used as a reference for the
pred_table
. This is to reduce the memory footprint. The data_table_ref
is a list that can be used to access the referenced data.
Check out the example below to see how it’s done.
- Log the tables to W&B as W&B Artifacts.
- Each new
pred_table
is logged as a new version with aliases.
Example:
class WandbClfEvalCallback(WandbEvalCallback):
    def __init__(self, validation_data, data_table_columns, pred_table_columns):
        super().__init__(data_table_columns, pred_table_columns)
        self.x = validation_data[0]
        self.y = validation_data[1]

    def add_ground_truth(self, logs=None):
        for idx, (image, label) in enumerate(zip(self.x, self.y)):
            self.data_table.add_data(idx, wandb.Image(image), label)

    def add_model_predictions(self, epoch, logs=None):
        preds = self.model.predict(self.x, verbose=0)
        preds = tf.argmax(preds, axis=-1)
        data_table_ref = self.data_table_ref
        table_idxs = data_table_ref.get_index()
        for idx in table_idxs:
            pred = preds[idx]
            self.pred_table.add_data(
                epoch,
                data_table_ref.data[idx][0],
                data_table_ref.data[idx][1],
                data_table_ref.data[idx][2],
                pred,
            )
model.fit(
x,
y,
epochs=2,
validation_data=(x, y),
callbacks=[
WandbClfEvalCallback(
validation_data=(x, y),
data_table_columns=["idx", "image", "label"],
pred_table_columns=["epoch", "idx", "image", "label", "pred"],
)
],
)
To have more fine-grained control, you can override the on_train_begin
and
on_epoch_end
methods. If you want to log the samples after every N batches, you
can implement the on_train_batch_end
method.
Methods
add_ground_truth
View source
@abc.abstractmethod
add_ground_truth(
logs: Optional[Dict[str, float]] = None
) -> None
Add ground truth data to data_table
.
Use this method to write the logic for adding validation/training data to
data_table
initialized using init_data_table
method.
Example:
for idx, data in enumerate(dataloader):
    self.data_table.add_data(idx, data)
This method is called once on_train_begin
or equivalent hook.
add_model_predictions
View source
@abc.abstractmethod
add_model_predictions(
epoch: int,
logs: Optional[Dict[str, float]] = None
) -> None
Add a prediction from a model to pred_table
.
Use this method to write the logic for adding model prediction for validation/
training data to pred_table
initialized using init_pred_table
method.
Example:
# Assuming the dataloader is not shuffling the samples.
for idx, data in enumerate(dataloader):
    preds = model.predict(data)
    self.pred_table.add_data(
        self.data_table_ref.data[idx][0],
        self.data_table_ref.data[idx][1],
        preds,
    )
This method is called on_epoch_end
or equivalent hook.
init_data_table
View source
init_data_table(
column_names: List[str]
) -> None
Initialize the W&B Tables for validation data.
Call this method on_train_begin
or equivalent hook. This is followed by adding
data to the table row or column wise.
Args |
|
column_names |
(list) Column names for W&B Tables. |
init_pred_table
View source
init_pred_table(
column_names: List[str]
) -> None
Initialize the W&B Tables for model evaluation.
Call this method on_epoch_end
or equivalent hook. This is followed by adding
data to the table row or column wise.
Args |
|
column_names |
(list) Column names for W&B Tables. |
log_data_table
View source
log_data_table(
name: str = "val",
type: str = "dataset",
table_name: str = "val_data"
) -> None
Log the data_table
as W&B artifact and call use_artifact
on it.
This lets the evaluation table use the reference of already uploaded data
(images, text, scalar, etc.) without re-uploading.
Args |
|
name |
(str) A human-readable name for this artifact, which is how you can identify this artifact in the UI or reference it in use_artifact calls. (default is ‘val’) |
type |
(str) The type of the artifact, which is used to organize and differentiate artifacts. (default is ‘dataset’) |
table_name |
(str) The name of the table as it will be displayed in the UI. (default is ‘val_data’). |
log_pred_table
View source
log_pred_table(
type: str = "evaluation",
table_name: str = "eval_data",
aliases: Optional[List[str]] = None
) -> None
Log the W&B Tables for model evaluation.
The table will be logged multiple times, creating a new version each time. Use this
to compare models at different intervals interactively.
Args |
|
type |
(str) The type of the artifact, which is used to organize and differentiate artifacts. (default is ’evaluation’) |
table_name |
(str) The name of the table as it will be displayed in the UI. (default is ‘eval_data’) |
aliases |
(List[str]) List of aliases for the prediction table. |
set_model
set_params
3.8.2.4 - WandbMetricsLogger
Logger that sends system metrics to W&B.
WandbMetricsLogger(
log_freq: Union[LogStrategy, int] = "epoch",
initial_global_step: int = 0,
*args,
**kwargs
) -> None
WandbMetricsLogger
automatically logs the logs
dictionary that callback methods
take as argument to wandb.
This callback automatically logs the following to a W&B run page:
- system (CPU/GPU/TPU) metrics,
- train and validation metrics defined in
model.compile
,
- learning rate (both for a fixed value or a learning rate scheduler)
Notes:
If you resume training by passing initial_epoch
to model.fit
and you are using a
learning rate scheduler, make sure to pass initial_global_step
to
WandbMetricsLogger
. The initial_global_step
is step_size * initial_step
, where
step_size
is number of training steps per epoch. step_size
can be calculated as
the product of the cardinality of the training dataset and the batch size.
Args |
|
log_freq |
(epoch , batch , or an int ) if epoch , logs metrics at the end of each epoch. If batch , logs metrics at the end of each batch. If an int , logs metrics at the end of that many batches. Defaults to epoch . |
initial_global_step |
(int) Use this argument to correctly log the learning rate when you resume training from some initial_epoch , and a learning rate scheduler is used. This can be computed as step_size * initial_step . Defaults to 0. |
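Example: a sketch of the resume scenario described in the notes above. The step counts, data, and model are placeholders:
from wandb.integration.keras import WandbMetricsLogger

steps_per_epoch = 100  # placeholder: training steps per epoch
initial_epoch = 5  # placeholder: epoch to resume from

logger = WandbMetricsLogger(
    log_freq="epoch",
    initial_global_step=steps_per_epoch * initial_epoch,  # step_size * initial_step
)
model.fit(
    x_train,
    y_train,
    initial_epoch=initial_epoch,
    epochs=10,
    callbacks=[logger],
)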
Methods
set_model
set_params
3.8.2.5 - WandbModelCheckpoint
A checkpoint that periodically saves a Keras model or model weights.
WandbModelCheckpoint(
filepath: StrPath,
monitor: str = "val_loss",
verbose: int = 0,
save_best_only: bool = (False),
save_weights_only: bool = (False),
mode: Mode = "auto",
save_freq: Union[SaveStrategy, int] = "epoch",
initial_value_threshold: Optional[float] = None,
**kwargs
) -> None
Saved weights are uploaded to W&B as a wandb.Artifact
.
Since this callback is subclassed from tf.keras.callbacks.ModelCheckpoint
, the
checkpointing logic is taken care of by the parent callback. You can learn more
here: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint
This callback is to be used in conjunction with training using model.fit()
to save
a model or weights (in a checkpoint file) at some interval. The model checkpoints
will be logged as W&B Artifacts. You can learn more here:
https://docs.wandb.ai/guides/core/artifacts
This callback provides the following features:
- Save the model that has achieved
best performance
based on monitor
.
- Save the model at the end of every epoch regardless of the performance.
- Save the model at the end of epoch or after a fixed number of training batches.
- Save only model weights, or save the whole model.
- Save the model either in SavedModel format or in
.h5
format.
Args |
|
filepath |
(Union[str, os.PathLike]) path to save the model file. filepath can contain named formatting options, which will be filled by the value of epoch and keys in logs (passed in on_epoch_end ). For example: if filepath is model-{epoch:02d}-{val_loss:.2f} , then the model checkpoints will be saved with the epoch number and the validation loss in the filename. |
monitor |
(str) The metric name to monitor. Defaults to val_loss . |
verbose |
(int) Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action. |
save_best_only |
(bool) if save_best_only=True , it only saves when the model is considered the “best”, and the latest best model according to the quantity monitored will not be overwritten. If filepath doesn’t contain formatting options like {epoch} then filepath will be overwritten by each new better model locally. The model logged as an artifact will still be associated with the correct monitor . Artifacts will be uploaded continuously and versioned separately as a new best model is found. |
save_weights_only |
(bool) if True, then only the model’s weights will be saved. |
mode |
(Mode) one of {auto , min , max }. For val_acc this should be max , for val_loss this should be min , and so on. |
save_freq |
(Union[SaveStrategy, int]) epoch or an integer. When using 'epoch' , the callback saves the model after each epoch. When using an integer, the callback saves the model at the end of that many batches. Note that when monitoring validation metrics such as val_acc or val_loss , save_freq must be set to “epoch”, as those metrics are only available at the end of an epoch. |
initial_value_threshold |
(Optional[float]) Floating point initial “best” value of the metric to be monitored. |
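Example: a sketch that keeps only the best checkpoint by validation loss, saved at the end of each epoch. The file path, data, and model are placeholders:
from wandb.integration.keras import WandbModelCheckpoint

checkpoint = WandbModelCheckpoint(
    filepath="models/model-{epoch:02d}-{val_loss:.2f}",  # formatted with epoch and metric
    monitor="val_loss",
    mode="min",
    save_best_only=True,
    save_freq="epoch",
)
model.fit(
    x_train,
    y_train,
    validation_data=(x_val, y_val),
    epochs=10,
    callbacks=[checkpoint],
)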
Methods
set_model
set_params
3.8.3 - WandbTracer
View source on GitHub
Callback Handler that logs to Weights and Biases.
This handler will log the model architecture and run traces to Weights and Biases. This will ensure that all LangChain activity is logged to W&B.
Attributes |
|
always_verbose |
Whether to call verbose callbacks even if verbose is False. |
ignore_agent |
Whether to ignore agent callbacks. |
ignore_chain |
Whether to ignore chain callbacks. |
ignore_llm |
Whether to ignore LLM callbacks. |
Methods
finish
View source
@staticmethod
finish() -> None
Waits for all asynchronous processes to finish and data to upload.
finish_run
View source
Waits for W&B data to upload.
init
View source
@classmethod
init(
run_args: Optional[WandbRunArgs] = None,
include_stdout: bool = (True),
additional_handlers: Optional[List['BaseCallbackHandler']] = None
) -> None
Sets up a WandbTracer and makes it the default handler.
Parameters:
run_args
: (dict, optional) Arguments to pass to wandb.init()
. If not provided, wandb.init()
will be
called with no arguments. Please refer to the wandb.init
for more details.
include_stdout
: (bool, optional) If True, the StdOutCallbackHandler
will be added to the list of
handlers. This is common practice when using LangChain as it prints useful information to stdout.
additional_handlers
: (list, optional) A list of additional handlers to add to the list of LangChain handlers.
To use W&B to
monitor all LangChain activity, simply call this function at the top of
the notebook or script:
from wandb.integration.langchain import WandbTracer
WandbTracer.init()
# ...
# end of notebook / script:
WandbTracer.finish()
It is safe to call this repeatedly with the same arguments (such as in a
notebook), as it will only create a new run if the run_args differ.
init_run
View source
init_run(
run_args: Optional[WandbRunArgs] = None
) -> None
Initialize wandb if it has not been initialized.
Parameters:
run_args
: (dict, optional) Arguments to pass to wandb.init()
. If not provided, wandb.init()
will be
called with no arguments. Please refer to the wandb.init
for more details.
We only want to start a new run if the run args differ. This will reduce
the number of W&B runs created, which is more ideal in a notebook
setting. Note: it is uncommon to call this method directly. Instead, you
should use the WandbTracer.init()
method. This method is exposed if you
want to manually initialize the tracer and add it to the list of handlers.
load_default_session
View source
load_default_session() -> "TracerSession"
Load the default tracing session and set it as the Tracer’s session.
load_session
View source
load_session(
session_name: str
) -> "TracerSession"
Load a session from the tracer.
new_session
new_session(
name: Optional[str] = None,
**kwargs
) -> TracerSession
NOT thread safe, do not call this method from multiple threads.
on_agent_action
on_agent_action(
action: AgentAction,
**kwargs
) -> Any
Do nothing.
on_agent_finish
on_agent_finish(
finish: AgentFinish,
**kwargs
) -> None
Handle an agent finish message.
on_chain_end
on_chain_end(
outputs: Dict[str, Any],
**kwargs
) -> None
End a trace for a chain run.
on_chain_error
on_chain_error(
error: Union[Exception, KeyboardInterrupt],
**kwargs
) -> None
Handle an error for a chain run.
on_chain_start
on_chain_start(
serialized: Dict[str, Any],
inputs: Dict[str, Any],
**kwargs
) -> None
Start a trace for a chain run.
on_llm_end
on_llm_end(
response: LLMResult,
**kwargs
) -> None
End a trace for an LLM run.
on_llm_error
on_llm_error(
error: Union[Exception, KeyboardInterrupt],
**kwargs
) -> None
Handle an error for an LLM run.
on_llm_new_token
on_llm_new_token(
token: str,
**kwargs
) -> None
Handle a new token for an LLM run.
on_llm_start
on_llm_start(
serialized: Dict[str, Any],
prompts: List[str],
**kwargs
) -> None
Start a trace for an LLM run.
on_text
on_text(
text: str,
**kwargs
) -> None
Handle a text message.
on_tool_end(
output: str,
**kwargs
) -> None
End a trace for a tool run.
on_tool_error(
error: Union[Exception, KeyboardInterrupt],
**kwargs
) -> None
Handle an error for a tool run.
on_tool_start(
serialized: Dict[str, Any],
input_str: str,
**kwargs
) -> None
Start a trace for a tool run.
3.9 - launch-library
Classes
class LaunchAgent
: Launch agent class which polls on given run queues and launches runs for wandb launch.
Functions
launch(...)
: Launch a W&B launch experiment.
launch_add(...)
: Enqueue a W&B launch experiment by uri
, job
, or docker_image
.
3.9.1 - launch
Launch a W&B launch experiment.
launch(
api: Api,
job: Optional[str] = None,
entry_point: Optional[List[str]] = None,
version: Optional[str] = None,
name: Optional[str] = None,
resource: Optional[str] = None,
resource_args: Optional[Dict[str, Any]] = None,
project: Optional[str] = None,
entity: Optional[str] = None,
docker_image: Optional[str] = None,
config: Optional[Dict[str, Any]] = None,
synchronous: Optional[bool] = (True),
run_id: Optional[str] = None,
repository: Optional[str] = None
) -> AbstractRun
Arguments |
|
job |
string reference to a wandb.Job , such as wandb/test/my-job:latest . |
api |
An instance of a wandb Api from wandb.apis.internal . |
entry_point |
Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs. |
version |
For Git-based projects, either a commit hash or a branch name. |
name |
Name under which to launch the run. |
resource |
Execution backend for the run. |
resource_args |
Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args . |
project |
Target project to send launched run to. |
entity |
Target entity to send launched run to. |
config |
A dictionary containing the configuration for the run. May also contain resource specific arguments under the key resource_args . |
synchronous |
Whether to block while waiting for a run to complete. Defaults to True. If synchronous is False and backend is local-container , this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated. If synchronous is True and the run fails, the current process will error out as well. |
run_id |
ID for the run (To ultimately replace the :name: field). |
repository |
string name of repository path for remote registry. |
Example:
from wandb.sdk.launch import launch
job = "wandb/jobs/Hello World:latest"
params = {"epochs": 5}
# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch(api, job, parameters=params)
Returns |
|
an instance of wandb.launch.SubmittedRun exposing information about the launched run, such as the run ID. |
|
Raises |
|
wandb.exceptions.ExecutionError If a run launched in blocking mode is unsuccessful. |
|
3.9.2 - launch_add
Enqueue a W&B launch experiment with a source uri
, job
or docker_image
.
launch_add(
uri: Optional[str] = None,
job: Optional[str] = None,
config: Optional[Dict[str, Any]] = None,
template_variables: Optional[Dict[str, Union[float, int, str]]] = None,
project: Optional[str] = None,
entity: Optional[str] = None,
queue_name: Optional[str] = None,
resource: Optional[str] = None,
entry_point: Optional[List[str]] = None,
name: Optional[str] = None,
version: Optional[str] = None,
docker_image: Optional[str] = None,
project_queue: Optional[str] = None,
resource_args: Optional[Dict[str, Any]] = None,
run_id: Optional[str] = None,
build: Optional[bool] = (False),
repository: Optional[str] = None,
sweep_id: Optional[str] = None,
author: Optional[str] = None,
priority: Optional[int] = None
) -> "public.QueuedRun"
Arguments |
|
uri |
URI of experiment to run. A wandb run uri or a Git repository URI. |
job |
string reference to a wandb.Job eg: wandb/test/my-job:latest |
config |
A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args” |
template_variables |
A dictionary containing values of template variables for a run queue. Expected format of {"VAR_NAME": VAR_VALUE} |
project |
Target project to send launched run to |
entity |
Target entity to send launched run to |
queue_name |
the name of the queue to enqueue the run to |
priority |
the priority level of the job, where 1 is the highest priority |
resource |
Execution backend for the run: W&B provides built-in support for “local-container” backend |
entry_point |
Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs. |
name |
Name under which to launch the run. |
version |
For Git-based projects, either a commit hash or a branch name. |
docker_image |
The name of the docker image to use for the run. |
resource_args |
Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args . |
run_id |
optional string indicating the id of the launched run |
build |
optional flag defaulting to False; requires the queue to be set. If True, an image is created, a job artifact is created, and a reference to that job artifact is pushed to the queue. |
repository |
optional string to control the name of the remote repository, used when pushing images to a registry |
project_queue |
optional string to control the name of the project for the queue. Primarily used for backwards compatibility with project-scoped queues. |
Example:
from wandb.sdk.launch import launch_add
project_uri = "https://github.com/wandb/examples"
params = {"alpha": 0.5, "l1_ratio": 0.01}
# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch_add(uri=project_uri, parameters=params)
Returns |
|
an instance of wandb.api.public.QueuedRun which gives information about the queued run, or if wait_until_started or wait_until_finished are called, gives access to the underlying Run information. |
|
Raises |
|
wandb.exceptions.LaunchError if unsuccessful |
|
3.9.3 - launch-library
Classes
class LaunchAgent
: Launch agent class which polls on given run queues and launches runs for wandb launch.
Functions
launch(...)
: Launch a W&B launch experiment.
launch_add(...)
: Enqueue a W&B launch experiment with either a source uri
, job
, or docker_image
.
3.9.4 - LaunchAgent
Launch agent class which polls on given run queues and launches runs for wandb launch.
LaunchAgent(
api: Api,
config: Dict[str, Any]
)
Arguments |
|
api |
Api object to use for making requests to the backend. |
config |
Config dictionary for the agent. |
Attributes |
|
num_running_jobs |
Return the number of jobs not including schedulers. |
num_running_schedulers |
Return just the number of schedulers. |
thread_ids |
Returns a list of running thread IDs for the agent. |
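Example: a rough sketch of constructing and running an agent programmatically. The import path and config keys shown here are assumptions; most deployments start agents with the wandb launch-agent CLI instead:
from wandb.apis.internal import Api
from wandb.sdk.launch.agent import LaunchAgent  # import path is an assumption

api = Api()
config = {
    # Assumed keys; consult your launch agent configuration for the real schema.
    "entity": "my-entity",
    "queues": ["default"],
}
agent = LaunchAgent(api=api, config=config)
agent.loop()  # poll the queues for jobs until interrupted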
Methods
check_sweep_state
View source
check_sweep_state(
launch_spec, api
)
Check the state of a sweep before launching a run for the sweep.
fail_run_queue_item
View source
fail_run_queue_item(
run_queue_item_id, message, phase, files=None
)
finish_thread_id
View source
finish_thread_id(
thread_id, exception=None
)
Removes the job from our list for now.
get_job_and_queue
View source
initialized
View source
@classmethod
initialized() -> bool
Return whether the agent is initialized.
loop
View source
Loop infinitely to poll for jobs and run them.
Raises |
|
KeyboardInterrupt |
if the agent is requested to stop. |
name
View source
@classmethod
name() -> str
Return the name of the agent.
pop_from_queue
View source
Pops an item off the runqueue to run as a job.
Arguments |
|
queue |
Queue to pop from. |
Returns |
|
Item popped off the queue. |
|
Raises |
|
Exception |
if there is an error popping from the queue. |
print_status
View source
Prints the current status of the agent.
run_job
View source
run_job(
job, queue, file_saver
)
Set up project and run the job.
Arguments |
|
job |
Job to run. |
task_run_job
View source
task_run_job(
launch_spec, job, default_config, api, job_tracker
)
update_status
View source
Update the status of the agent.
Arguments |
|
status |
Status to update the agent to. |
3.10 - log
Upload run data.
log(
data: dict[str, Any],
step: (int | None) = None,
commit: (bool | None) = None,
sync: (bool | None) = None
) -> None
Use log
to log data from runs, such as scalars, images, video,
histograms, plots, and tables.
See our guides to logging for
live examples, code snippets, best practices, and more.
The most basic usage is run.log({"train-loss": 0.5, "accuracy": 0.9})
.
This will save the loss and accuracy to the run’s history and update
the summary values for these metrics.
Visualize logged data in the workspace at wandb.ai,
or locally on a self-hosted instance
of the W&B app, or export data to visualize and explore locally, such as in a Jupyter notebook, with our API.
Logged values don’t have to be scalars. Logging any wandb object is supported.
For example run.log({"example": wandb.Image("myimage.jpg")})
will log an
example image which will be displayed nicely in the W&B UI.
See the reference documentation
for all of the different supported types or check out our
guides to logging for examples,
from 3D molecular structures and segmentation masks to PR curves and histograms.
You can use wandb.Table
to log structured data. See our
guide to logging tables
for details.
The W&B UI organizes metrics with a forward slash (/
) in their name
into sections named using the text before the final slash. For example,
the following results in two sections named “train” and “validate”:
run.log(
{
"train/accuracy": 0.9,
"train/loss": 30,
"validate/accuracy": 0.8,
"validate/loss": 20,
}
)
Only one level of nesting is supported. run.log({"a/b/c": 1})
produces a section named "a/b"
.
run.log
is not intended to be called more than a few times per second.
For optimal performance, limit your logging to once every N iterations,
or collect data over multiple iterations and log it in a single step.
The W&B step
With basic usage, each call to log
creates a new step
.
The step must always increase, and it is not possible to log
to a previous step.
Note that you can use any metric as the X axis in charts.
In many cases, it is better to treat the W&B step like
you’d treat a timestamp rather than a training step.
# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})
See also define_metric.
It is possible to use multiple log
invocations to log to
the same step with the step
and commit
parameters.
The following are all equivalent:
# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})
# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})
# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
Args |
|
data |
A dict with str keys and values that are serializable Python objects including: int , float and string ; any of the wandb.data_types ; lists, tuples and NumPy arrays of serializable Python objects; other dict s of this structure. |
step |
The step number to log. If None , then an implicit auto-incrementing step is used. See the notes in the description. |
commit |
If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None , then the default is commit=True ; otherwise, the default is commit=False . |
sync |
This argument is deprecated and does nothing. |
Examples:
For more examples, and more detailed ones, see
our guides to logging.
Basic usage
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
Incremental logging
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
Image from numpy
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})
Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
        low=0,
        high=256,
        size=(100, 100, 3),
        dtype=np.uint8,
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})
Video from numpy
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0,
high=256,
size=(10, 3, 100, 100),
dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})
Matplotlib Plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y) # plot y = x^2
run.log({"chart": fig})
PR Curve
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
3D Object
import wandb
run = wandb.init()
run.log(
{
"generated_samples": [
wandb.Object3D(open("sample.obj")),
wandb.Object3D(open("sample.gltf")),
wandb.Object3D(open("sample.glb")),
]
}
)
Raises |
|
wandb.Error |
if called before wandb.init |
ValueError |
if invalid data is passed |
3.11 - login
Set up W&B login credentials.
login(
anonymous: Optional[Literal['must', 'allow', 'never']] = None,
key: Optional[str] = None,
relogin: Optional[bool] = None,
host: Optional[str] = None,
force: Optional[bool] = None,
timeout: Optional[int] = None,
verify: bool = (False)
) -> bool
By default, this will only store credentials locally without
verifying them with the W&B server. To verify credentials, pass
verify=True
.
Args |
|
anonymous |
(string, optional) Can be must , allow , or never . If set to must , always log a user in anonymously. If set to allow , only create an anonymous user if the user isn’t already logged in. If set to never , never log a user anonymously. Default set to never . |
key |
(string, optional) The API key to use. |
relogin |
(bool, optional) If true, will re-prompt for API key. |
host |
(string, optional) The host to connect to. |
force |
(bool, optional) If true, will force a relogin. |
timeout |
(int, optional) Number of seconds to wait for user input. |
verify |
(bool) Verify the credentials with the W&B server. |
Returns |
|
bool |
if key is configured |
Raises |
|
AuthenticationError |
if api_key fails verification with the server |
UsageError |
if api_key cannot be configured and no tty |
|
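Example: a sketch that verifies an API key against a self-hosted server. The host URL and environment variable are placeholders:
import os
import wandb

logged_in = wandb.login(
    key=os.environ.get("WANDB_API_KEY"),  # placeholder: read the key from the environment
    host="https://wandb.example.com",  # placeholder self-hosted instance
    verify=True,  # check the credentials with the W&B server
)
print("credentials configured:", logged_in)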
3.12 - Run
A unit of computation logged by wandb. Typically, this is an ML experiment.
Run(
settings: Settings,
config: (dict[str, Any] | None) = None,
sweep_config: (dict[str, Any] | None) = None,
launch_config: (dict[str, Any] | None) = None
) -> None
Create a run with wandb.init()
:
import wandb
run = wandb.init()
There is only ever at most one active wandb.Run
in any process,
and it is accessible as wandb.run
:
import wandb
assert wandb.run is None
wandb.init()
assert wandb.run is not None
Anything you log with wandb.log
will be sent to that run.
If you want to start more runs in the same script or notebook, you’ll need to
finish the run that is in-flight. Runs can be finished with wandb.finish
or
by using them in a with
block:
import wandb
wandb.init()
wandb.finish()
assert wandb.run is None
with wandb.init() as run:
pass # log data here
assert wandb.run is None
See the documentation for wandb.init
for more on creating runs, or check out
our guide to wandb.init
.
In distributed training, you can either create a single run in the rank 0 process
and then log information only from that process, or you can create a run in each process,
logging from each separately, and group the results together with the group
argument
to wandb.init
. For more details on distributed training with W&B, check out
our guide.
Currently, there is a parallel Run
object in the wandb.Api
. Eventually these
two objects will be merged.
Attributes |
|
summary |
(Summary) Single values set for each wandb.log() key. By default, summary is set to the last value logged. You can manually set summary to the best value, like max accuracy, instead of the final value. |
config |
Config object associated with this run. |
dir |
The directory where files associated with the run are saved. |
entity |
The name of the W&B entity associated with the run. Entity can be a username or the name of a team or organization. |
group |
Name of the group associated with the run. Setting a group helps the W&B UI organize runs in a sensible way. If you are doing a distributed training you should give all of the runs in the training the same group. If you are doing cross-validation you should give all the cross-validation folds the same group. |
id |
Identifier for this run. |
mode |
For compatibility with 0.9.x and earlier, deprecate eventually. |
name |
Display name of the run. Display names are not guaranteed to be unique and may be descriptive. By default, they are randomly generated. |
notes |
Notes associated with the run, if there are any. Notes can be a multiline string and can also use markdown and latex equations inside $$ , like $x + 3$ . |
path |
Path to the run. Run paths include entity, project, and run ID, in the format entity/project/run_id . |
project |
Name of the W&B project associated with the run. |
resumed |
True if the run was resumed, False otherwise. |
settings |
A frozen copy of run’s Settings object. |
start_time |
Unix timestamp (in seconds) of when the run started. |
starting_step |
The first step of the run. |
step |
Current value of the step. This counter is incremented by wandb.log . |
sweep_id |
ID of the sweep associated with the run, if there is one. |
tags |
Tags associated with the run, if there are any. |
url |
The W&B url associated with the run. |
Methods
alert
View source
alert(
title: str,
text: str,
level: (str | AlertLevel | None) = None,
wait_duration: (int | float | timedelta | None) = None
) -> None
Launch an alert with the given title and text.
Args |
|
title |
(str) The title of the alert, must be less than 64 characters long. |
text |
(str) The text body of the alert. |
level |
(str or AlertLevel, optional) The alert level to use, either: INFO , WARN , or ERROR . |
wait_duration |
(int, float, or timedelta, optional) The time to wait (in seconds) before sending another alert with this title. |
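Example: a sketch that sends a warning at most once every five minutes. The metric check is a placeholder:
from datetime import timedelta

if val_accuracy < 0.5:  # placeholder condition
    run.alert(
        title="Low accuracy",
        text=f"Validation accuracy dropped to {val_accuracy:.2f}",
        level="WARN",
        wait_duration=timedelta(minutes=5),
    )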
define_metric
View source
define_metric(
name: str,
step_metric: (str | wandb_metric.Metric | None) = None,
step_sync: (bool | None) = None,
hidden: (bool | None) = None,
summary: (str | None) = None,
goal: (str | None) = None,
overwrite: (bool | None) = None
) -> wandb_metric.Metric
Customize metrics logged with wandb.log()
.
Args |
|
name |
The name of the metric to customize. |
step_metric |
The name of another metric to serve as the X-axis for this metric in automatically generated charts. |
step_sync |
Automatically insert the last value of step_metric into run.log() if it is not provided explicitly. Defaults to True if step_metric is specified. |
hidden |
Hide this metric from automatic plots. |
summary |
Specify aggregate metrics added to summary. Supported aggregations include min , max , mean , last , best , copy and none . best is used together with the goal parameter. none prevents a summary from being generated. copy is deprecated. |
goal |
Specify how to interpret the best summary type. Supported options are minimize and maximize . |
overwrite |
If false, then this call is merged with previous define_metric calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls. |
Returns |
|
An object that represents this call but can otherwise be discarded. |
|
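Example: a sketch that plots val/loss against a custom epoch axis and keeps only the minimum value in the summary. The loss computation is a placeholder:
# Use "epoch" as the x-axis for val/loss and summarize it by its minimum.
run.define_metric("epoch")
run.define_metric("val/loss", step_metric="epoch", summary="min")

for epoch in range(10):
    run.log({"epoch": epoch, "val/loss": compute_val_loss()})  # placeholder function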
detach
View source
display
View source
display(
height: int = 420,
hidden: bool = (False)
) -> bool
Display this run in jupyter.
finish
View source
finish(
exit_code: (int | None) = None,
quiet: (bool | None) = None
) -> None
Finish a run and upload any remaining data.
Marks the completion of a W&B run and ensures all data is synced to the server.
The run’s final state is determined by its exit conditions and sync status.
Run States:
- Running: Active run that is logging data and/or sending heartbeats.
- Crashed: Run that stopped sending heartbeats unexpectedly.
- Finished: Run completed successfully (
exit_code=0
) with all data synced.
- Failed: Run completed with errors (
exit_code!=0
).
Args |
|
exit_code |
Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed. |
quiet |
Deprecated. Configure logging verbosity using wandb.Settings(quiet=...) . |
finish_artifact
View source
finish_artifact(
artifact_or_path: (Artifact | str),
name: (str | None) = None,
type: (str | None) = None,
aliases: (list[str] | None) = None,
distributed_id: (str | None) = None
) -> Artifact
Finishes a non-finalized artifact as output of a run.
Subsequent “upserts” with the same distributed ID will result in a new version.
Args |
|
artifact_or_path |
(str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact . |
name |
(str, optional) An artifact name. May be prefixed with entity/project. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified. |
type |
(str) The type of artifact to log, examples include dataset , model |
aliases |
(list, optional) Aliases to apply to this artifact, defaults to ["latest"] |
distributed_id |
(string, optional) Unique string that all distributed jobs share. If None, defaults to the run’s group name. |
Returns |
|
An Artifact object. |
|
get_project_url
View source
get_project_url() -> (str | None)
Return the url for the W&B project associated with the run, if there is one.
Offline runs will not have a project url.
get_sweep_url
View source
get_sweep_url() -> (str | None)
Return the url for the sweep associated with the run, if there is one.
get_url
View source
get_url() -> (str | None)
Return the url for the W&B run, if there is one.
Offline runs will not have a url.
join
View source
join(
exit_code: (int | None) = None
) -> None
Deprecated alias for finish()
- use finish instead.
link_artifact
View source
link_artifact(
artifact: Artifact,
target_path: str,
aliases: (list[str] | None) = None
) -> None
Link the given artifact to a portfolio (a promoted collection of artifacts).
The linked artifact will be visible in the UI for the specified portfolio.
Args |
|
artifact |
the (public or local) artifact which will be linked |
target_path |
str - takes the following forms: {portfolio} , {project}/{portfolio} , or {entity}/{project}/{portfolio} |
aliases |
List[str] - optional alias(es) that will only be applied on this linked artifact inside the portfolio. The alias “latest” will always be applied to the latest version of an artifact that is linked. |
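Example: a sketch that logs an artifact and then links it to a portfolio. The artifact, entity, project, and portfolio names are placeholders:
artifact = run.log_artifact("model.pt", name="my-model", type="model")  # placeholder file
run.link_artifact(
    artifact=artifact,
    target_path="my-entity/my-project/production-models",  # placeholder portfolio path
    aliases=["candidate"],
)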
link_model
View source
link_model(
path: StrPath,
registered_model_name: str,
name: (str | None) = None,
aliases: (list[str] | None) = None
) -> None
Log a model artifact version and link it to a registered model in the model registry.
The linked model version will be visible in the UI for the specified registered model.
Steps:
- Check if ’name’ model artifact has been logged. If so, use the artifact version that matches the files
located at ‘path’ or log a new version. Otherwise log files under ‘path’ as a new model artifact, ’name’
of type ‘model’.
- Check if registered model with name ‘registered_model_name’ exists in the ‘model-registry’ project.
If not, create a new registered model with name ‘registered_model_name’.
- Link version of model artifact ’name’ to registered model, ‘registered_model_name’.
- Attach aliases from ‘aliases’ list to the newly linked model artifact version.
Args |
|
path |
(str) A path to the contents of this model, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path |
registered_model_name |
(str) - the name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team’s specific ML Task. The entity that this registered model belongs to will be derived from the run |
name |
(str, optional) - the name of the model artifact that files in ‘path’ will be logged to. This will default to the basename of the path prepended with the current run id if not specified. |
aliases |
(List[str], optional) - alias(es) that will only be applied on this linked artifact inside the registered model. The alias “latest” will always be applied to the latest version of an artifact that is linked. |
Examples:
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.link_model(
path="/local/directory",
registered_model_name="my_entity/my_project/my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
Raises |
|
AssertionError |
if registered_model_name is a path or if model artifact ’name’ is of a type that does not contain the substring ‘model’ |
ValueError |
if name has invalid special characters |
log
View source
log(
data: dict[str, Any],
step: (int | None) = None,
commit: (bool | None) = None,
sync: (bool | None) = None
) -> None
Upload run data.
Use log
to log data from runs, such as scalars, images, video,
histograms, plots, and tables.
See our guides to logging for
live examples, code snippets, best practices, and more.
The most basic usage is run.log({"train-loss": 0.5, "accuracy": 0.9})
.
This will save the loss and accuracy to the run’s history and update
the summary values for these metrics.
Visualize logged data in the workspace at wandb.ai
or locally on a self-hosted instance
of the W&B app. Use our API to export data to visualize and explore locally
Logged values don’t have to be scalars. Logging any wandb object is supported.
For example run.log({"example": wandb.Image("myimage.jpg")})
will log an
example image which will be displayed nicely in the W&B UI.
See the reference documentation
for all of the different supported types or check out our
guides to logging for examples,
from 3D molecular structures and segmentation masks to PR curves and histograms.
You can use wandb.Table
to log structured data. See our
guide to logging tables
for details.
The W&B UI organizes metrics with a forward slash (/
) in their name
into sections named using the text before the final slash. For example,
the following results in two sections named “train” and “validate”:
run.log(
{
"train/accuracy": 0.9,
"train/loss": 30,
"validate/accuracy": 0.8,
"validate/loss": 20,
}
)
Only one level of nesting is supported. run.log({"a/b/c": 1})
produces a section named "a/b"
.
run.log
is not intended to be called more than a few times per second.
For optimal performance, limit your logging to once every N iterations,
or collect data over multiple iterations and log it in a single step.
The W&B step
With basic usage, each call to log
creates a new step
.
The step must always increase, and it is not possible to log
to a previous step.
You can use any metric as the X axis in charts.
In many cases, it is better to treat the W&B step like
you’d treat a timestamp rather than a training step.
# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})
See also define_metric.
It is possible to use multiple log
invocations to log to
the same step with the step
and commit
parameters.
The following are all equivalent:
# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})
# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})
# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
Args |
|
data |
A dict with str keys and values that are serializable Python objects including: int , float and string ; any of the wandb.data_types ; lists, tuples and NumPy arrays of serializable Python objects; other dict s of this structure. |
step |
The step number to log. If None , then an implicit auto-incrementing step is used. See the notes in the description. |
commit |
If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None , then the default is commit=True ; otherwise, the default is commit=False . |
sync |
This argument is deprecated and does nothing. |
Examples:
For more examples, and more detailed ones, see
our guides to logging.
Basic usage
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
Incremental logging
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
Image from numpy
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
    image = wandb.Image(pixels, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})
Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
    pixels = np.random.randint(
        low=0, high=256, size=(100, 100, 3), dtype=np.uint8
    )
    pil_image = PILImage.fromarray(pixels, mode="RGB")
    image = wandb.Image(pil_image, caption=f"random field {i}")
    examples.append(image)
run.log({"examples": examples})
Video from numpy
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8
)
run.log({"video": wandb.Video(frames, fps=4)})
Matplotlib Plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y) # plot y = x^2
run.log({"chart": fig})
PR Curve
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
3D Object
import wandb
run = wandb.init()
run.log(
{
"generated_samples": [
wandb.Object3D(open("sample.obj")),
wandb.Object3D(open("sample.gltf")),
wandb.Object3D(open("sample.glb")),
]
}
)
Raises |
|
wandb.Error |
if called before wandb.init |
ValueError |
if invalid data is passed |
log_artifact
View source
log_artifact(
artifact_or_path: (Artifact | StrPath),
name: (str | None) = None,
type: (str | None) = None,
aliases: (list[str] | None) = None,
tags: (list[str] | None) = None
) -> Artifact
Declare an artifact as an output of a run.
Args |
|
artifact_or_path |
(str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact . |
name |
(str, optional) An artifact name. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified. |
type |
(str) The type of artifact to log, examples include dataset , model |
aliases |
(list, optional) Aliases to apply to this artifact, defaults to ["latest"] |
tags |
(list, optional) Tags to apply to this artifact, if any. |
Returns |
|
An Artifact object. |
|
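Example: a sketch that logs a local checkpoint file as a model artifact. The path, name, and aliases are placeholders:
run.log_artifact(
    "checkpoints/model.pt",  # placeholder local path
    name="resnet50-checkpoint",  # placeholder artifact name
    type="model",
    aliases=["latest", "baseline"],
)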
log_code
View source
log_code(
root: (str | None) = ".",
name: (str | None) = None,
include_fn: (Callable[[str, str], bool] | Callable[[str], bool]) = _is_py_requirements_or_dockerfile,
exclude_fn: (Callable[[str, str], bool] | Callable[[str], bool]) = filenames.exclude_wandb_fn
) -> (Artifact | None)
Save the current state of your code to a W&B Artifact.
By default, it walks the current directory and logs all files that end with .py
.
Args |
|
root |
The relative (to os.getcwd() ) or absolute path to recursively find code from. |
name |
(str, optional) The name of our code artifact. By default, we’ll name the artifact source-$PROJECT_ID-$ENTRYPOINT_RELPATH . There may be scenarios where you want many runs to share the same artifact. Specifying name allows you to achieve that. |
include_fn |
A callable that accepts a file path and (optionally) root path and returns True when it should be included and False otherwise. This defaults to: lambda path, root: path.endswith(".py") |
exclude_fn |
A callable that accepts a file path and (optionally) root path and returns True when it should be excluded and False otherwise. This defaults to a function that excludes all files within <root>/.wandb/ and <root>/wandb/ directories. |
Examples:
Basic usage
Advanced usage
run.log_code(
"../",
include_fn=lambda path: path.endswith(".py") or path.endswith(".ipynb"),
exclude_fn=lambda path, root: os.path.relpath(path, root).startswith(
"cache/"
),
)
Returns |
|
An Artifact object if code was logged |
|
log_model
View source
log_model(
path: StrPath,
name: (str | None) = None,
aliases: (list[str] | None) = None
) -> None
Logs a model artifact containing the contents inside the ‘path’ to a run and marks it as an output to this run.
Args |
|
path |
(str) A path to the contents of this model, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path |
name |
(str, optional) A name to assign to the model artifact that the file contents will be added to. The string must contain only the following alphanumeric characters: dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified. |
aliases |
(list, optional) Aliases to apply to the created model artifact, defaults to ["latest"] |
Examples:
run.log_model(
path="/local/directory",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.log_model(
path="/local/directory",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
Raises |
|
ValueError |
if name has invalid special characters |
mark_preempting
View source
mark_preempting() -> None
Mark this run as preempting.
Also tells the internal process to immediately report this to server.
project_name
View source
restore
View source
restore(
name: str,
run_path: (str | None) = None,
replace: bool = (False),
root: (str | None) = None
) -> (None | TextIO)
Download the specified file from cloud storage.
File is placed into the current directory or run directory.
By default, will only download the file if it doesn’t already exist.
Args |
|
name |
the name of the file |
run_path |
optional path to a run to pull files from, such as username/project_name/run_id . If wandb.init has not been called, run_path is required. |
replace |
whether to download the file even if it already exists locally |
root |
the directory to download the file to. Defaults to the current directory or the run directory if wandb.init was called. |
Returns |
|
None if it can’t find the file, otherwise a file object open for reading |
|
Raises |
|
wandb.CommError |
if we can’t connect to the wandb backend |
ValueError |
if the file is not found or can’t find run_path |
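Example: a sketch that pulls a file logged by an earlier run. The run path and filename are placeholders:
weights_file = run.restore(
    "model.h5",  # placeholder filename
    run_path="my-entity/my-project/1a2b3c4d",  # placeholder run path
)
if weights_file is not None:
    print("restored to", weights_file.name)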
save
View source
save(
glob_str: (str | os.PathLike | None) = None,
base_path: (str | os.PathLike | None) = None,
policy: PolicyName = "live"
) -> (bool | list[str])
Sync one or more files to W&B.
Relative paths are relative to the current working directory.
A Unix glob, such as "myfiles/*"
, is expanded at the time save
is
called regardless of the policy
. In particular, new files are not
picked up automatically.
A base_path
may be provided to control the directory structure of
uploaded files. It should be a prefix of glob_str
, and the directory
structure beneath it is preserved. It’s best understood through
examples:
wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
# of "files/".
Note: when given an absolute path or glob and no base_path
, one
directory level is preserved as in the example above.
Args |
|
glob_str |
A relative or absolute path or Unix glob. |
base_path |
A path to use to infer a directory structure; see examples. |
policy |
One of live , now , or end . * live: upload the file as it changes, overwriting the previous version * now: upload the file once now * end: upload file when the run ends |
Returns |
|
Paths to the symlinks created for the matched files. For historical reasons, this may return a boolean in legacy code. |
|
status
View source
Get sync info from the internal backend, about the current run’s sync status.
to_html
View source
to_html(
height: int = 420,
hidden: bool = (False)
) -> str
Generate HTML containing an iframe displaying the current run.
unwatch
View source
unwatch(
models: (torch.nn.Module | Sequence[torch.nn.Module] | None) = None
) -> None
Remove pytorch model topology, gradient hooks, and parameter hooks.
Args |
|
models |
(torch.nn.Module | Sequence[torch.nn.Module]) Optional list of pytorch models that have had watch called on them |
upsert_artifact
View source
upsert_artifact(
artifact_or_path: (Artifact | str),
name: (str | None) = None,
type: (str | None) = None,
aliases: (list[str] | None) = None,
distributed_id: (str | None) = None
) -> Artifact
Declare (or append to) a non-finalized artifact as output of a run.
Note that you must call run.finish_artifact() to finalize the artifact.
This is useful when distributed jobs need to all contribute to the same artifact.
Args |
|
artifact_or_path |
(str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact . |
name |
(str, optional) An artifact name. May be prefixed with entity/project. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified. |
type |
(str) The type of artifact to log, examples include dataset , model |
aliases |
(list, optional) Aliases to apply to this artifact, defaults to ["latest"] |
distributed_id |
(string, optional) Unique string that all distributed jobs share. If None, defaults to the run’s group name. |
Returns |
|
An Artifact object. |
|
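Example: a sketch of the distributed pattern described above, where several workers contribute to one artifact and a single process finalizes it. The paths, rank variable, and distributed ID are placeholders:
# In each worker process (all sharing the same distributed_id):
run.upsert_artifact(
    f"outputs/shard-{rank}",  # placeholder per-worker directory
    name="training-outputs",
    type="dataset",
    distributed_id="job-2024-01",  # placeholder shared ID
)

# In one process, after every worker has finished upserting:
run.finish_artifact(
    "outputs/final",  # placeholder path
    name="training-outputs",
    type="dataset",
    distributed_id="job-2024-01",
)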
use_artifact
View source
use_artifact(
artifact_or_name: (str | Artifact),
type: (str | None) = None,
aliases: (list[str] | None) = None,
use_as: (str | None) = None
) -> Artifact
Declare an artifact as an input to a run.
Call download
or file
on the returned object to get the contents locally.
Args |
|
artifact_or_name |
(str or Artifact) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms: - name:version - name:alias You can also pass an Artifact object created by calling wandb.Artifact |
type |
(str, optional) The type of artifact to use. |
aliases |
(list, optional) Aliases to apply to this artifact |
use_as |
(string, optional) Optional string indicating what purpose the artifact was used with. Will be shown in UI. |
Returns |
|
An Artifact object. |
|
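Example: a sketch that declares a dataset artifact as an input and downloads its contents. The artifact name is a placeholder:
dataset = run.use_artifact("my-dataset:latest", type="dataset")  # placeholder name
data_dir = dataset.download()  # local directory containing the artifact files
print("dataset downloaded to", data_dir)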
use_model
View source
use_model(
name: str
) -> FilePathStr
Download the files logged in a model artifact ’name’.
Args |
|
name |
(str) A model artifact name. ’name’ must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms: - model_artifact_name:version - model_artifact_name:alias |
Examples:
run.use_model(
name="my_model_artifact:latest",
)
run.use_model(
name="my_project/my_model_artifact:v0",
)
run.use_model(
name="my_entity/my_project/my_model_artifact:<digest>",
)
Invalid usage
run.use_model(
name="my_entity/my_project/my_model_artifact",
)
Raises |
|
AssertionError |
if model artifact ’name’ is of a type that does not contain the substring ‘model’. |
Returns |
|
path |
(str) path to downloaded model artifact files. |
watch
View source
watch(
models: (torch.nn.Module | Sequence[torch.nn.Module]),
criterion: (torch.F | None) = None,
log: (Literal['gradients', 'parameters', 'all'] | None) = "gradients",
log_freq: int = 1000,
idx: (int | None) = None,
log_graph: bool = (False)
) -> None
Hooks into the given PyTorch models to monitor gradients and the model’s computational graph.
This function can track parameters, gradients, or both during training. It should be
extended to support arbitrary machine learning models in the future.
Args |
|
models (Union[torch.nn.Module, Sequence[torch.nn.Module]]) |
A single model or a sequence of models to be monitored. |
criterion (Optional[torch.F]) |
The loss function being optimized (optional). |
log (Optional[Literal["gradients", "parameters", "all"]]) |
Specifies whether to log gradients, parameters, or all. Set to None to disable logging. (default="gradients") |
log_freq (int) |
Frequency (in batches) to log gradients and parameters. (default=1000) |
idx (Optional[int]) |
Index used when tracking multiple models with wandb.watch. (default=None) |
log_graph (bool) |
Whether to log the model’s computational graph. (default=False) |
|
Raises |
|
ValueError |
If wandb.init has not been called or if any of the models are not instances of torch.nn.Module . |
__enter__
View source
__exit__
View source
__exit__(
exc_type: type[BaseException],
exc_val: BaseException,
exc_tb: TracebackType
) -> bool
3.13 - save
Sync one or more files to W&B.
save(
glob_str: (str | os.PathLike | None) = None,
base_path: (str | os.PathLike | None) = None,
policy: PolicyName = "live"
) -> (bool | list[str])
Relative paths are relative to the current working directory.
A Unix glob, such as "myfiles/*", is expanded at the time save is called regardless of the policy. In particular, new files are not picked up automatically.
A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved. It’s best understood through examples:
wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
# of "files/".
Note: when given an absolute path or glob and no base_path, one directory level is preserved, as in the example above.
Args |
|
glob_str |
A relative or absolute path or Unix glob. |
base_path |
A path to use to infer a directory structure; see examples. |
policy |
One of live , now , or end . * live: upload the file as it changes, overwriting the previous version * now: upload the file once now * end: upload file when the run ends |
Returns |
|
Paths to the symlinks created for the matched files. For historical reasons, this may return a boolean in legacy code. |
|
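A short sketch combining the policies above, using the Run method form inside an active run (file paths are placeholders):
import wandb

with wandb.init(project="my-project") as run:
    # Upload checkpoints continuously as they change during training.
    run.save("checkpoints/*.pt", policy="live")

    # Upload the config file exactly once, right now.
    run.save("config.yaml", policy="now")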
3.14 - sweep
Initialize a hyperparameter sweep.
sweep(
sweep: Union[dict, Callable],
entity: Optional[str] = None,
project: Optional[str] = None,
prior_runs: Optional[List[str]] = None
) -> str
Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.
Make a note of the unique identifier, sweep_id, that is returned. At a later step, provide the sweep_id to a sweep agent.
Args |
|
sweep |
The configuration of a hyperparameter search. (or configuration generator). See Sweep configuration structure for information on how to define your sweep. If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec. |
entity |
The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username. |
project |
The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled ‘Uncategorized’. |
prior_runs |
The run IDs of existing runs to add to this sweep. |
Returns |
|
sweep_id |
str. A unique identifier for the sweep. |
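A minimal sketch of the workflow, assuming a random-search configuration and a placeholder training function:
import wandb

# Sweep configuration; metric and parameter names are placeholders.
sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

sweep_id = wandb.sweep(sweep_config, entity="my-entity", project="my-project")

def train():
    run = wandb.init()
    # ... train using run.config.learning_rate and run.config.batch_size ...
    run.log({"val_loss": 0.42})  # placeholder metric
    run.finish()

# Provide the returned sweep_id to an agent that executes the training function.
wandb.agent(sweep_id, function=train, entity="my-entity", project="my-project", count=5)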
3.15 - wandb_workspaces
Classes
class reports
: Python library for programmatically working with W&B Reports API.
class workspaces
: Python library for programmatically working with W&B Workspace API.
3.15.1 - Reports
module wandb_workspaces.reports.v2
Python library for programmatically working with W&B Reports API.
import wandb_workspaces.reports.v2 as wr
report = wr.Report(
entity="entity",
project="project",
title="An amazing title",
description="A descriptive description.",
)
blocks = [
wr.PanelGrid(
panels=[
wr.LinePlot(x="time", y="velocity"),
wr.ScatterPlot(x="time", y="acceleration"),
]
)
]
report.blocks = blocks
report.save()
class BarPlot
A panel object that shows a 2D bar plot.
Attributes:
title
(Optional[str]): The text that appears at the top of the plot.
metrics
(LList[MetricType]): One or more metrics logged to your W&B project that the report pulls information from.
orientation
(Literal[“v”, “h”]): The orientation of the bar plot. Set to either vertical (“v”) or horizontal (“h”). Defaults to horizontal (“h”).
range_x
(Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
title_x
(Optional[str]): The label of the x-axis.
title_y
(Optional[str]): The label of the y-axis.
groupby
(Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc
(Optional[GroupAgg]): Aggregate runs with specified function. Options include mean
, min
, max
, median
, sum
, samples
, or None
.
groupby_rangefunc
(Optional[GroupArea]): Group runs based on a range. Options include minmax
, stddev
, stderr
, none
, samples
, or None
.
max_runs_to_show
(Optional[int]): The maximum number of runs to show on the plot.
max_bars_to_show
(Optional[int]): The maximum number of bars to show on the bar plot.
custom_expressions
(Optional[LList[str]]): A list of custom expressions to be used in the bar plot.
legend_template
(Optional[str]): The template for the legend.
font_size
( Optional[FontSize]): The size of the line plot’s font. Options include small
, medium
, large
, auto
, or None
.
line_titles
(Optional[dict]): The titles of the lines. The keys are the line names and the values are the titles.
line_colors
(Optional[dict]): The colors of the lines. The keys are the line names and the values are the colors.
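A brief sketch of constructing a bar plot with the attributes above (the metric and the groupby key are placeholders):
import wandb_workspaces.reports.v2 as wr

bar = wr.BarPlot(
    title="Validation accuracy by model size",
    metrics=["val_accuracy"],
    orientation="h",           # horizontal bars
    groupby="model_size",      # assumed config/metric key logged to the project
    groupby_aggfunc="mean",
    max_bars_to_show=10,
)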
class BlockQuote
A block of quoted text.
Attributes:
text
(str): The text of the block quote.
class CalloutBlock
A block of callout text.
Attributes:
text
(str): The callout text.
class CheckedList
A list of items with checkboxes. Add one or more CheckedListItem
within CheckedList
.
Attributes:
items
(LList[CheckedListItem]): A list of one or more CheckedListItem
objects.
class CheckedListItem
A list item with a checkbox. Add one or more CheckedListItem
within CheckedList
.
Attributes:
text
(str): The text of the list item.
checked
(bool): Whether the checkbox is checked. By default, set to False
.
class CodeBlock
A block of code.
Attributes:
code
(str): The code in the block.
language
(Optional[Language]): The language of the code. Language specified is used for syntax highlighting. By default, set to python
. Options include javascript
, python
, css
, json
, html
, markdown
, yaml
.
class CodeComparer
A panel object that compares the code between two different runs.
Attributes:
diff
(Literal['split', 'unified'])
: How to display code differences. Options include split
and unified
.
class Config
Metrics logged to a run’s config object. Config objects are commonly logged using run.config[name] = ...
or passing a config as a dictionary of key-value pairs, where the key is the name of the metric and the value is the value of that metric.
Attributes:
name
(str): The name of the metric.
class CustomChart
A panel that shows a custom chart. The chart is defined by a weave query.
Attributes:
query
(dict): The query that defines the custom chart. The key is the name of the field, and the value is the query.
chart_name
(str): The title of the custom chart.
chart_fields
(dict): Key-value pairs that define the axis of the plot. Where the key is the label, and the value is the metric.
chart_strings
(dict): Key-value pairs that define the strings in the chart.
classmethod from_table
from_table(
table_name: str,
chart_fields: dict = None,
chart_strings: dict = None
)
Create a custom chart from a table.
Arguments:
table_name
(str): The name of the table.
chart_fields
(dict): The fields to display in the chart.
chart_strings
(dict): The strings to display in the chart.
class Gallery
A block that renders a gallery of reports and URLs.
Attributes:
items
(List[Union[GalleryReport
, GalleryURL
]]): A list of GalleryReport
and GalleryURL
objects.
class GalleryReport
A reference to a report in the gallery.
Attributes:
report_id
(str): The ID of the report.
class GalleryURL
A URL to an external resource.
Attributes:
url
(str): The URL of the resource.
title
(Optional[str]): The title of the resource.
description
(Optional[str]): The description of the resource.
image_url
(Optional[str]): The URL of an image to display.
class GradientPoint
A point in a gradient.
Attributes:
color
: The color of the point.
offset
: The position of the point in the gradient. The value should be between 0 and 100.
class H1
An H1 heading with the text specified.
Attributes:
text
(str): The text of the heading.
collapsed_blocks
(Optional[LList[“BlockTypes”]]): The blocks to show when the heading is collapsed.
class H2
An H2 heading with the text specified.
Attributes:
text
(str): The text of the heading.
collapsed_blocks
(Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.
class H3
An H3 heading with the text specified.
Attributes:
text
(str): The text of the heading.
collapsed_blocks
(Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.
class Heading
class HorizontalRule
HTML horizontal line.
class Image
A block that renders an image.
Attributes:
url
(str): The URL of the image.
caption
(str): The caption of the image. Caption appears underneath the image.
class InlineCode
Inline code. Does not add newline character after code.
Attributes:
text
(str): The code you want to appear in the report.
class InlineLatex
Inline LaTeX markdown. Does not add newline character after the LaTeX markdown.
Attributes:
text
(str): LaTeX markdown you want to appear in the report.
class LatexBlock
A block of LaTeX text.
Attributes:
text
(str): The LaTeX text.
class Layout
The layout of a panel in a report. Adjusts the size and position of the panel.
Attributes:
x
(int): The x position of the panel.
y
(int): The y position of the panel.
w
(int): The width of the panel.
h
(int): The height of the panel.
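A small sketch, assuming panels accept a layout argument as described by the Panel class (metric names and sizes are illustrative):
import wandb_workspaces.reports.v2 as wr

# Pin a line plot to the top-left of its panel grid and give it a wide footprint.
panel = wr.LinePlot(
    x="Step",
    y=["loss"],
    layout=wr.Layout(x=0, y=0, w=12, h=8),
)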
class LinePlot
A panel object with 2D line plots.
Attributes:
title
(Optional[str]): The text that appears at the top of the plot.
x
(Optional[MetricType]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
y
(LList[MetricType]): One or more metrics logged to your W&B project that the report pulls information from. The metric specified is used for the y-axis.
range_x
(Tuple[float | None
, float | None
]): Tuple that specifies the range of the x-axis.
range_y
(Tuple[float | None
, float | None
]): Tuple that specifies the range of the y-axis.
log_x
(Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
log_y
(Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
title_x
(Optional[str]): The label of the x-axis.
title_y
(Optional[str]): The label of the y-axis.
ignore_outliers
(Optional[bool]): If set to True
, do not plot outliers.
groupby
(Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc
(Optional[GroupAgg]): Aggregate runs with specified function. Options include mean
, min
, max
, median
, sum
, samples
, or None
.
groupby_rangefunc
(Optional[GroupArea]): Group runs based on a range. Options include minmax
, stddev
, stderr
, none
, samples
, or None
.
smoothing_factor
(Optional[float]): The smoothing factor to apply to the smoothing type. Accepted values range between 0 and 1.
smoothing_type Optional[SmoothingType]
: Apply a filter based on the specified distribution. Options include exponentialTimeWeighted
, exponential
, gaussian
, average
, or none
.
smoothing_show_original
(Optional[bool]): If set to True
, show the original data.
max_runs_to_show
(Optional[int]): The maximum number of runs to show on the line plot.
custom_expressions
(Optional[LList[str]]): Custom expressions to apply to the data.
plot_type Optional[LinePlotStyle]
: The type of line plot to generate. Options include line
, stacked-area
, or pct-area
.
font_size Optional[FontSize]
: The size of the line plot’s font. Options include small
, medium
, large
, auto
, or None
.
legend_position Optional[LegendPosition]
: Where to place the legend. Options include north
, south
, east
, west
, or None
.
legend_template
(Optional[str]): The template for the legend.
aggregate
(Optional[bool]): If set to True
, aggregate the data.
xaxis_expression
(Optional[str]): The expression for the x-axis.
legend_fields
(Optional[LList[str]]): The fields to include in the legend.
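For example, a minimal sketch of a smoothed, log-scale loss plot (metric names are placeholders):
import wandb_workspaces.reports.v2 as wr

loss_plot = wr.LinePlot(
    title="Training loss",
    x="Step",
    y=["train_loss", "val_loss"],
    log_y=True,
    smoothing_type="exponential",
    smoothing_factor=0.6,
    max_runs_to_show=20,
)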
class Link
A link to a URL.
Attributes:
text
(Union[str, TextWithInlineComments]): The text of the link.
url
(str): The URL the link points to.
class MarkdownBlock
A block of markdown text. Useful if you want to write text that uses common markdown syntax.
Attributes:
text
(str): The markdown text.
class MarkdownPanel
A panel that renders markdown.
Attributes:
markdown
(str): The text you want to appear in the markdown panel.
class MediaBrowser
A panel that displays media files in a grid layout.
Attributes:
num_columns
(Optional[int]): The number of columns in the grid.
media_keys
(LList[str]): A list of media keys that correspond to the media files.
class Metric
A metric to display in a report that is logged in your project.
Attributes:
name
(str): The name of the metric.
class OrderBy
A metric to order by.
Attributes:
name
(str): The name of the metric.
ascending
(bool): Whether to sort in ascending order. By default set to False
.
class OrderedList
A list of items in a numbered list.
Attributes:
items
(LList[str]): A list of one or more OrderedListItem
objects.
class OrderedListItem
A list item in an ordered list.
Attributes:
text
(str): The text of the list item.
class P
A paragraph of text.
Attributes:
text
(str): The text of the paragraph.
class Panel
A panel that displays a visualization in a panel grid.
Attributes:
layout
(Layout): A Layout
object.
class PanelGrid
A grid that consists of runsets and panels. Add runsets and panels with Runset
and Panel
objects, respectively.
Available panels include: LinePlot
, ScatterPlot
, BarPlot
, ScalarChart
, CodeComparer
, ParallelCoordinatesPlot
, ParameterImportancePlot
, RunComparer
, MediaBrowser
, MarkdownPanel
, CustomChart
, WeavePanel
, WeavePanelSummaryTable
, WeavePanelArtifactVersionedFile
.
Attributes:
runsets
(LList[“Runset”]): A list of one or more Runset
objects.
panels
(LList[“PanelTypes”]): A list of one or more Panel
objects.
active_runset
(int): The number of runs you want to display within a runset. By default, it is set to 0.
custom_run_colors
(dict): Key-value pairs where the key is the name of a run and the value is a color specified by a hexadecimal value.
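A short sketch of a panel grid backed by a single runset (entity and project names are placeholders):
import wandb_workspaces.reports.v2 as wr

grid = wr.PanelGrid(
    runsets=[
        wr.Runset(entity="my-entity", project="my-project", name="Baseline runs"),
    ],
    panels=[
        wr.LinePlot(x="Step", y=["loss"]),
        wr.BarPlot(metrics=["accuracy"]),
    ],
)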
class ParallelCoordinatesPlot
A panel object that shows a parallel coordinates plot.
Attributes:
columns
(LList[ParallelCoordinatesPlotColumn]): A list of one or more ParallelCoordinatesPlotColumn
objects.
title
(Optional[str]): The text that appears at the top of the plot.
gradient
(Optional[LList[GradientPoint]]): A list of gradient points.
font_size
(Optional[FontSize]): The size of the line plot’s font. Options include small
, medium
, large
, auto
, or None
.
class ParallelCoordinatesPlotColumn
A column within a parallel coordinates plot. The order of metrics specified determines the order of the parallel axes (x-axis) in the parallel coordinates plot.
Attributes:
metric
(str | Config | SummaryMetric): The name of the metric logged to your W&B project that the report pulls information from.
display_name
(Optional[str]): The name of the metric
inverted
(Optional[bool]): Whether to invert the metric.
log
(Optional[bool]): Whether to apply a log transformation to the metric.
class ParameterImportancePlot
A panel that shows how important each hyperparameter is in predicting the chosen metric.
Attributes:
with_respect_to
(str): The metric you want to compare the parameter importance against. Common metrics might include the loss, accuracy, and so forth. The metric you specify must be logged within the project that the report pulls information from.
class Report
An object that represents a W&B Report. Use the returned object’s blocks
attribute to customize your report. Report objects do not automatically save. Use the save()
method to persist changes.
Attributes:
project
(str): The name of the W&B project you want to load in. The project specified appears in the report’s URL.
entity
(str): The W&B entity that owns the report. The entity appears in the report’s URL.
title
(str): The title of the report. The title appears at the top of the report as an H1 heading.
description
(str): A description of the report. The description appears underneath the report’s title.
blocks
(LList[BlockTypes]): A list of one or more HTML tags, plots, grids, runsets, and more.
width
(Literal[‘readable’, ‘fixed’, ‘fluid’]): The width of the report. Options include ‘readable’, ‘fixed’, ‘fluid’.
property url
The URL where the report is hosted. The report URL takes the form https://wandb.ai/{entity}/{project_name}/reports/, where {entity} and {project_name} are the entity that the report belongs to and the name of the project, respectively.
classmethod from_url
from_url(url: str, as_model: bool = False)
Load a report into the current environment. Pass in the URL where the report is hosted.
Arguments:
url
(str): The URL where the report is hosted.
as_model
(bool): If True, return the model object instead of the Report object. By default, set to False
.
method save
save(draft: bool = False, clone: bool = False)
Persists changes made to a report object.
method to_html
to_html(height: int = 1024, hidden: bool = False) → str
Generate HTML containing an iframe displaying this report. Commonly used within a Python notebook.
Arguments:
height
(int): Height of the iframe.
hidden
(bool): If True, hide the iframe. Default set to False
.
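Putting the methods above together, a minimal sketch that loads an existing report, appends blocks, and saves it (the URL is a placeholder):
import wandb_workspaces.reports.v2 as wr

report = wr.Report.from_url("https://wandb.ai/my-entity/my-project/reports/My-Report--XYZ")
report.blocks += [
    wr.H2(text="Follow-up analysis"),
    wr.P(text="Notes added programmatically."),
]
report.save()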
class RunComparer
A panel that compares metrics across different runs from the project the report pulls information from.
Attributes:
diff_only
(Optional[Literal["split", True]])
: Display only the difference across runs in a project. You can toggle this feature on and off in the W&B Report UI.
class Runset
A set of runs to display in a panel grid.
Attributes:
entity
(str): An entity that owns or has the correct permissions to the project where the runs are stored.
project
(str): The name of the project where the runs are stored.
name
(str): The name of the run set. Set to Run set
by default.
query
(str): A query string to filter runs.
filters
(Optional[str]): A filter string to filter runs.
groupby
(LList[str]): A list of metric names to group by.
order
(LList[OrderBy]): A list of OrderBy
objects to order by.
custom_run_colors
(dict): A dictionary mapping run IDs to colors.
class RunsetGroup
UI element that shows a group of runsets.
Attributes:
runset_name
(str): The name of the runset.
keys
(Tuple[RunsetGroupKey, …]): The keys to group by. Pass in one or more RunsetGroupKey
objects to group by.
class RunsetGroupKey
Groups runsets by a metric type and value. Part of a RunsetGroup
. Specify the metric type and value to group by as key-value pairs.
Attributes:
key
(Type[str] | Type[Config] | Type[SummaryMetric] | Type[Metric]): The metric type to group by.
value
(str): The value of the metric to group by.
class ScalarChart
A panel object that shows a scalar chart.
Attributes:
title
(Optional[str]): The text that appears at the top of the plot.
metric
(MetricType): The name of a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc
(Optional[GroupAgg]): Aggregate runs with specified function. Options include mean
, min
, max
, median
, sum
, samples
, or None
.
groupby_rangefunc
(Optional[GroupArea]): Group runs based on a range. Options include minmax
, stddev
, stderr
, none
, samples
, or None
.
custom_expressions
(Optional[LList[str]]): A list of custom expressions to be used in the scalar chart.
legend_template
(Optional[str]): The template for the legend.
font_size Optional[FontSize]
: The size of the line plot’s font. Options include small
, medium
, large
, auto
, or None
.
class ScatterPlot
A panel object that shows a 2D or 3D scatter plot.
Arguments:
title
(Optional[str]): The text that appears at the top of the plot.
x Optional[SummaryOrConfigOnlyMetric]
: The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
y Optional[SummaryOrConfigOnlyMetric]
: One or more metrics logged to your W&B project that the report pulls information from. Metrics specified are plotted on the y-axis.
z Optional[SummaryOrConfigOnlyMetric]
: The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the z-axis.
range_x
(Tuple[float | None
, float | None
]): Tuple that specifies the range of the x-axis.
range_y
(Tuple[float | None
, float | None
]): Tuple that specifies the range of the y-axis.
range_z
(Tuple[float | None
, float | None
]): Tuple that specifies the range of the z-axis.
log_x
(Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
log_y
(Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
log_z
(Optional[bool]): Plots the z-coordinates using a base-10 logarithmic scale.
running_ymin
(Optional[bool]): Apply a moving average or rolling mean.
running_ymax
(Optional[bool]): Apply a moving average or rolling mean.
running_ymean
(Optional[bool]): Apply a moving average or rolling mean.
legend_template
(Optional[str]): A string that specifies the format of the legend.
gradient
(Optional[LList[GradientPoint]]): A list of gradient points that specify the color gradient of the plot.
font_size
(Optional[FontSize]): The size of the line plot’s font. Options include small
, medium
, large
, auto
, or None
.
regression
(Optional[bool]): If True
, a regression line is plotted on the scatter plot.
class SoundCloud
A block that renders a SoundCloud player.
Attributes:
html
(str): The HTML code to embed the SoundCloud player.
class Spotify
A block that renders a Spotify player.
Attributes:
spotify_id
(str): The Spotify ID of the track or playlist.
class SummaryMetric
A summary metric to display in a report.
Attributes:
name
(str): The name of the metric.
class TableOfContents
A block that contains a list of sections and subsections using H1, H2, and H3 HTML blocks specified in a report.
class TextWithInlineComments
A block of text with inline comments.
Attributes:
text
(str): The text of the block.
class Twitter
A block that displays a Twitter feed.
Attributes:
html
(str): The HTML code to display the Twitter feed.
class UnorderedList
A list of items in a bulleted list.
Attributes:
items
(LList[str]): A list of one or more UnorderedListItem
objects.
class UnorderedListItem
A list item in an unordered list.
Attributes:
text
(str): The text of the list item.
class Video
A block that renders a video.
Attributes:
url
(str): The URL of the video.
class WeaveBlockArtifact
A block that shows an artifact logged to W&B. The query takes the form of
project('entity', 'project').artifact('artifact-name')
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
entity
(str): The entity that owns or has the appropriate permissions to the project where the artifact is stored.
project
(str): The project where the artifact is stored.
artifact
(str): The name of the artifact to retrieve.
tab Literal["overview", "metadata", "usage", "files", "lineage"]
: The tab to display in the artifact panel.
class WeaveBlockArtifactVersionedFile
A block that shows a versioned file logged to a W&B artifact. The query takes the form of
project('entity', 'project').artifactVersion('name', 'version').file('file-name')
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
entity
(str): The entity that owns or has the appropriate permissions to the project where the artifact is stored.
project
(str): The project where the artifact is stored.
artifact
(str): The name of the artifact to retrieve.
version
(str): The version of the artifact to retrieve.
file
(str): The name of the file stored in the artifact to retrieve.
class WeaveBlockSummaryTable
A block that shows a W&B Table, pandas DataFrame, plot, or other value logged to W&B. The query takes the form of
project('entity', 'project').runs.summary['value']
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
entity
(str): The entity that owns or has the appropriate permissions to the project where the values are logged.
project
(str): The project where the value is logged in.
table_name
(str): The name of the table, DataFrame, plot, or value.
class WeavePanel
An empty query panel that can be used to display custom content using queries.
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
class WeavePanelArtifact
A panel that shows an artifact logged to W&B.
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
artifact
(str): The name of the artifact to retrieve.
tab Literal["overview", "metadata", "usage", "files", "lineage"]
: The tab to display in the artifact panel.
class WeavePanelArtifactVersionedFile
A panel that shows a versioned file logged to a W&B artifact.
project('entity', 'project').artifactVersion('name', 'version').file('file-name')
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
artifact
(str): The name of the artifact to retrieve.
version
(str): The version of the artifact to retrieve.
file
(str): The name of the file stored in the artifact to retrieve.
class WeavePanelSummaryTable
A panel that shows a W&B Table, pandas DataFrame, plot, or other value logged to W&B. The query takes the form of
project('entity', 'project').runs.summary['value']
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
table_name
(str): The name of the table, DataFrame, plot, or value.
3.15.2 - Workspaces
module wandb_workspaces.workspaces
Python library for programmatically working with W&B Workspace API.
# How to import
import wandb_workspaces.reports.v2 as wr  # provides the panel classes used below
import wandb_workspaces.workspaces as ws

# Example of creating a workspace
workspace = ws.Workspace(
    name="Example W&B Workspace",
    entity="entity",  # entity that owns the workspace
    project="project",  # project that the workspace is associated with
    sections=[
        ws.Section(
            name="Validation Metrics",
            panels=[
                wr.LinePlot(x="Step", y=["val_loss"]),
                wr.BarPlot(metrics=["val_accuracy"]),
                wr.ScalarChart(metric="f1_score", groupby_aggfunc="mean"),
            ],
            is_open=True,
        ),
    ],
)
workspace.save()
class RunSettings
Settings for a run in a runset (left hand bar).
Attributes:
color
(str): The color of the run in the UI. Can be hex (#ff0000), css color (red), or rgb (rgb(255, 0, 0))
disabled
(bool): Whether the run is deactivated (eye closed in the UI). Default is set to False
.
class RunsetSettings
Settings for the runset (the left bar containing runs) in a workspace.
Attributes:
query
(str): A query to filter the runset (can be a regex expr, see next param).
regex_query
(bool): Controls whether the query (above) is a regex expr. Default is set to False
.
filters
(LList[expr.FilterExpr])
: A list of filters to apply to the runset. Filters are AND’d together. See FilterExpr for more information on creating filters.
groupby
(LList[expr.MetricType])
: A list of metrics to group by in the runset. Set to Metric
, Summary
, Config
, Tags
, or KeysInfo
.
order
(LList[expr.Ordering])
: A list of metrics and ordering to apply to the runset.
run_settings
(Dict[str, RunSettings])
: A dictionary of run settings, where the key is the run’s ID and the value is a RunSettings object.
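A brief sketch combining a regex query with per-run settings (run IDs, the query, and colors are placeholders):
import wandb_workspaces.workspaces as ws

runset_settings = ws.RunsetSettings(
    query="baseline.*",
    regex_query=True,
    run_settings={
        "run_id_1": ws.RunSettings(color="#ff0000"),
        "run_id_2": ws.RunSettings(color="rgb(0, 128, 255)", disabled=True),
    },
)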
class Section
Represents a section in a workspace.
Attributes:
name
(str): The name/title of the section.
panels
(LList[PanelTypes])
: An ordered list of panels in the section. By default, first is top-left and last is bottom-right.
is_open
(bool): Whether the section is open or closed. Default is closed.
layout_settings
(Literal[
standard,
custom])
: Settings for panel layout in the section.
panel_settings
: Panel-level settings applied to all panels in the section, similar to WorkspaceSettings
for a Section
.
class SectionLayoutSettings
Panel layout settings for a section, typically seen at the top right of the section of the W&B App Workspace UI.
Attributes:
layout
(Literal[
standard,
custom])
: The layout of panels in the section. standard
follows the default grid layout, custom
allows per-panel layouts controlled by the individual panel settings.
columns
(int): In a standard layout, the number of columns in the layout. Default is 3.
rows
(int): In a standard layout, the number of rows in the layout. Default is 2.
class SectionPanelSettings
Panel settings for a section, similar to WorkspaceSettings
for a section.
Settings applied here can be overridden by more granular Panel settings, in this priority: Section < Panel.
Attributes:
x_axis
(str): X-axis metric name setting. By default, set to Step
.
x_min Optional[float]
: Minimum value for the x-axis.
x_max Optional[float]
: Maximum value for the x-axis.
smoothing_type
(Literal['exponentialTimeWeighted', 'exponential', 'gaussian', 'average', 'none']): Smoothing type applied to all panels.
smoothing_weight
(int): Smoothing weight applied to all panels.
class Workspace
Represents a W&B workspace, including sections, settings, and config for run sets.
Attributes:
entity
(str): The entity this workspace will be saved to (usually user or team name).
project
(str): The project this workspace will be saved to.
name
: The name of the workspace.
sections
(LList[Section])
: An ordered list of sections in the workspace. The first section is at the top of the workspace.
settings
(WorkspaceSettings)
: Settings for the workspace, typically seen at the top of the workspace in the UI.
runset_settings
(RunsetSettings)
: Settings for the runset (the left bar containing runs) in a workspace.
property url
The URL to the workspace in the W&B app.
classmethod from_url
Get a workspace from a URL.
method save
Save the current workspace to W&B.
Returns:
Workspace
: The updated workspace with the saved internal name and ID.
method save_as_new_view
Save the current workspace as a new view to W&B.
Returns:
Workspace
: The updated workspace with the saved internal name and ID.
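A small sketch of the load-and-copy workflow described above (the workspace URL is a placeholder):
import wandb_workspaces.workspaces as ws

workspace = ws.Workspace.from_url("https://wandb.ai/my-entity/my-project?nw=abc123")
workspace.name = "Copy of team workspace"

# Save as a new view instead of overwriting the original workspace.
new_view = workspace.save_as_new_view()
print(new_view.url)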
class WorkspaceSettings
Settings for the workspace, typically seen at the top of the workspace in the UI.
This object includes settings for the x-axis, smoothing, outliers, panels, tooltips, runs, and panel query bar.
Settings applied here can be overridden by more granular Section and Panel settings, in this priority: Workspace < Section < Panel.
Attributes:
x_axis
(str): X-axis metric name setting.
x_min
(Optional[float])
: Minimum value for the x-axis.
x_max
(Optional[float])
: Maximum value for the x-axis.
smoothing_type
(Literal['exponentialTimeWeighted', 'exponential', 'gaussian', 'average', 'none'])
: Smoothing type applied to all panels.
smoothing_weight
(int): Smoothing weight applied to all panels.
ignore_outliers
(bool): Ignore outliers in all panels.
sort_panels_alphabetically
(bool): Sorts panels in all sections alphabetically.
group_by_prefix
(Literal[
first,
last])
: Group panels by the first or up to last prefix (first or last). Default is set to last
.
remove_legends_from_panels
(bool): Remove legends from all panels.
tooltip_number_of_runs
(Literal[
default,
all,
none])
: The number of runs to show in the tooltip.
tooltip_color_run_names
(bool): Whether to color run names in the tooltip to match the runset (True) or not (False). Default is set to True
.
max_runs
(int): The maximum number of runs to show per panel (this will be the first 10 runs in the runset).
point_visualization_method
(Literal[
line,
point,
line_point])
: The visualization method for points.
panel_search_query
(str): The query for the panel search bar (can be a regex expression).
auto_expand_panel_search_results
(bool): Whether to auto expand the panel search results.
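For instance, a minimal sketch of workspace-level defaults that sections and panels can still override (entity and project names are placeholders):
import wandb_workspaces.workspaces as ws

settings = ws.WorkspaceSettings(
    x_axis="Step",
    smoothing_type="exponential",
    smoothing_weight=10,
    ignore_outliers=True,
    sort_panels_alphabetically=True,
)

workspace = ws.Workspace(
    entity="my-entity",
    project="my-project",
    name="Tuned defaults",
    settings=settings,
)
workspace.save()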
3.16 - watch
Hooks into the given PyTorch models to monitor gradients and the model’s computational graph.
watch(
models: (torch.nn.Module | Sequence[torch.nn.Module]),
criterion: (torch.F | None) = None,
log: (Literal['gradients', 'parameters', 'all'] | None) = "gradients",
log_freq: int = 1000,
idx: (int | None) = None,
log_graph: bool = (False)
) -> None
This function can track parameters, gradients, or both during training. It should be
extended to support arbitrary machine learning models in the future.
Args |
|
models (Union[torch.nn.Module, Sequence[torch.nn.Module]]) |
A single model or a sequence of models to be monitored. |
criterion (Optional[torch.F]) |
The loss function being optimized (optional). |
log (Optional[Literal[ “gradients”, “parameters”, “all”, None]]) |
Specifies whether to log gradients, parameters, or all. Set to None to disable logging. (default="gradients" ) |
log_freq (int) |
Frequency (in batches) to log gradients and parameters. (default=1000 ) |
idx (Optional[int]) |
Index used when tracking multiple models with wandb.watch . (default=None) |
log_graph (bool) |
Whether to log the model’s computational graph. (default=False ) |
Raises |
|
ValueError |
If wandb.init has not been called or if any of the models are not instances of torch.nn.Module . |
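A minimal sketch of hooking a small placeholder model into gradient and parameter logging:
import torch.nn as nn
import wandb

run = wandb.init(project="my-project")

# Any torch.nn.Module works; this tiny network is only illustrative.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# Log gradients and parameters every 100 batches and capture the graph.
run.watch(model, log="all", log_freq=100, log_graph=True)

# ... training loop calling loss.backward() so there are gradients to log ...

run.unwatch(model)
run.finish()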
4 - Query Expression Language
Use the query expressions to select and aggregate data across runs and projects.
Learn more about Query Panels here: https://docs.wandb.ai/guides/models/app/features/panels/query-panel
Data Types
4.1 - artifact
Chainable Ops
artifact-link
Returns the url for an artifact
Return Value
The url for an artifact
artifact-name
Returns the name of the artifact
Return Value
The name of the artifact
artifact-versions
Returns the versions of the artifact
Return Value
The versions of the artifact
List Ops
artifact-link
Returns the url for an artifact
Return Value
The url for an artifact
artifact-name
Returns the name of the artifact
Return Value
The name of the artifact
artifact-versions
Returns the versions of the artifact
Return Value
The versions of the artifact
4.2 - artifactType
Chainable Ops
artifactType-artifactVersions
Returns the artifactVersions of all artifacts of the artifactType
Return Value
The artifactVersions of all artifacts of the artifactType
artifactType-artifacts
Returns the artifacts of the artifactType
Return Value
The artifacts of the artifactType
artifactType-name
Returns the name of the artifactType
Return Value
The name of the artifactType
List Ops
artifactType-artifactVersions
Returns the artifactVersions of all artifacts of the artifactType
Return Value
The artifactVersions of all artifacts of the artifactType
artifactType-artifacts
Returns the artifacts of the artifactType
Return Value
The artifacts of the artifactType
artifactType-name
Returns the name of the artifactType
Return Value
The name of the artifactType
4.3 - artifactVersion
Chainable Ops
artifactVersion-aliases
Returns the aliases for an artifactVersion
Return Value
The aliases for an artifactVersion
artifactVersion-createdAt
Returns the datetime at which the artifactVersion was created
Return Value
The datetime at which the artifactVersion was created
artifactVersion-file
Returns the file of the artifactVersion for the given path
Return Value
The file of the artifactVersion for the given path
artifactVersion-files
Returns the list of files of the artifactVersion
Return Value
The list of files of the artifactVersion
artifactVersion-link
Returns the url for an artifactVersion
Return Value
The url for an artifactVersion
artifactVersion-metadata
Returns the artifactVersion metadata dictionary
Return Value
The artifactVersion metadata dictionary
artifactVersion-name
Returns the name of the artifactVersion
Return Value
The name of the artifactVersion
artifactVersion-size
Returns the size of the artifactVersion
Return Value
The size of the artifactVersion
artifactVersion-usedBy
Returns the runs that use the artifactVersion
Return Value
The runs that use the artifactVersion
artifactVersion-versionId
Returns the versionId of the artifactVersion
Return Value
The versionId of the artifactVersion
List Ops
artifactVersion-aliases
Returns the aliases for an artifactVersion
Return Value
The aliases for an artifactVersion
artifactVersion-createdAt
Returns the datetime at which the artifactVersion was created
Return Value
The datetime at which the artifactVersion was created
artifactVersion-file
Returns the file of the artifactVersion for the given path
Return Value
The file of the artifactVersion for the given path
artifactVersion-files
Returns the list of files of the artifactVersion
Return Value
The list of files of the artifactVersion
artifactVersion-link
Returns the url for an artifactVersion
Return Value
The url for an artifactVersion
artifactVersion-metadata
Returns the artifactVersion metadata dictionary
Return Value
The artifactVersion metadata dictionary
artifactVersion-name
Returns the name of the artifactVersion
Return Value
The name of the artifactVersion
artifactVersion-size
Returns the size of the artifactVersion
Return Value
The size of the artifactVersion
artifactVersion-usedBy
Returns the runs that use the artifactVersion
Return Value
The runs that use the artifactVersion
artifactVersion-versionId
Returns the versionId of the artifactVersion
Return Value
The versionId of the artifactVersion
4.4 - audio-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.5 - bokeh-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.6 - boolean
Chainable Ops
and
Returns the logical and
of the two values
Argument |
|
lhs |
First binary value |
rhs |
Second binary value |
Return Value
The logical and
of the two values
or
Returns the logical or
of the two values
Argument |
|
lhs |
First binary value |
rhs |
Second binary value |
Return Value
The logical or
of the two values
boolean-not
Returns the logical inverse of the value
Argument |
|
bool |
The boolean value |
Return Value
The logical inverse of the value
List Ops
and
Returns the logical and
of the two values
Argument |
|
lhs |
First binary value |
rhs |
Second binary value |
Return Value
The logical and
of the two values
or
Returns the logical or
of the two values
Argument |
|
lhs |
First binary value |
rhs |
Second binary value |
Return Value
The logical or
of the two values
boolean-not
Returns the logical inverse of the value
Argument |
|
bool |
The boolean value |
Return Value
The logical inverse of the value
4.7 - entity
Chainable Ops
entity-link
Returns the link of the entity
Return Value
The link of the entity
entity-name
Returns the name of the entity
Return Value
The name of the entity
List Ops
entity-link
Returns the link of the entity
Return Value
The link of the entity
entity-name
Returns the name of the entity
Return Value
The name of the entity
4.8 - file
Chainable Ops
file-contents
Returns the contents of the file
Return Value
The contents of the file
file-digest
Returns the digest of the file
Return Value
The digest of the file
file-size
Returns the size of the file
Return Value
The size of the file
file-table
Returns the contents of the file as a table
Return Value
The contents of the file as a table
List Ops
file-contents
Returns the contents of the file
Return Value
The contents of the file
file-digest
Returns the digest of the file
Return Value
The digest of the file
file-size
Returns the size of the file
Return Value
The size of the file
file-table
Returns the contents of the file as a table
Return Value
The contents of the file as a table
4.9 - float
Chainable Ops
number-notEqual
Determines inequality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are not equal.
number-modulo
Divide a number by another and return remainder
Return Value
Modulo of two numbers
number-mult
Multiply two numbers
Return Value
Product of two numbers
number-powBinary
Raise a number to an exponent
Return Value
The base numbers raised to nth power
number-add
Add two numbers
Return Value
Sum of two numbers
number-sub
Subtract a number from another
Return Value
Difference of two numbers
number-div
Divide a number by another
Return Value
Quotient of two numbers
number-less
Check if a number is less than another
Return Value
Whether the first number is less than the second
number-lessEqual
Check if a number is less than or equal to another
Return Value
Whether the first number is less than or equal to the second
number-equal
Determines equality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are equal.
number-greater
Check if a number is greater than another
Return Value
Whether the first number is greater than the second
number-greaterEqual
Check if a number is greater than or equal to another
Return Value
Whether the first number is greater than or equal to the second
number-negate
Negate a number
Argument |
|
val |
Number to negate |
Return Value
A number
number-toString
Convert a number to a string
Argument |
|
in |
Number to convert |
Return Value
String representation of the number
number-toTimestamp
Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.
Argument |
|
val |
Number to convert to a timestamp |
Return Value
Timestamp
number-abs
Calculates the absolute value of a number
Return Value
The absolute value of the number
List Ops
number-notEqual
Determines inequality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are not equal.
number-modulo
Divide a number by another and return remainder
Return Value
Modulo of two numbers
number-mult
Multiply two numbers
Return Value
Product of two numbers
number-powBinary
Raise a number to an exponent
Return Value
The base numbers raised to nth power
number-add
Add two numbers
Return Value
Sum of two numbers
number-sub
Subtract a number from another
Return Value
Difference of two numbers
number-div
Divide a number by another
Return Value
Quotient of two numbers
number-less
Check if a number is less than another
Return Value
Whether the first number is less than the second
number-lessEqual
Check if a number is less than or equal to another
Return Value
Whether the first number is less than or equal to the second
number-equal
Determines equality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are equal.
number-greater
Check if a number is greater than another
Return Value
Whether the first number is greater than the second
number-greaterEqual
Check if a number is greater than or equal to another
Return Value
Whether the first number is greater than or equal to the second
number-negate
Negate a number
Argument |
|
val |
Number to negate |
Return Value
A number
numbers-argmax
Finds the index of maximum number
Argument |
|
numbers |
list of numbers to find the index of maximum number |
Return Value
Index of maximum number
numbers-argmin
Finds the index of minimum number
Argument |
|
numbers |
list of numbers to find the index of minimum number |
Return Value
Index of minimum number
numbers-avg
Average of numbers
Argument |
|
numbers |
list of numbers to average |
Return Value
Average of numbers
numbers-max
Maximum number
Return Value
Maximum number
numbers-min
Minimum number
Return Value
Minimum number
numbers-stddev
Standard deviation of numbers
Argument |
|
numbers |
list of numbers to calculate the standard deviation |
Return Value
Standard deviation of numbers
numbers-sum
Sum of numbers
Argument |
|
numbers |
list of numbers to sum |
Return Value
Sum of numbers
number-toString
Convert a number to a string
Argument |
|
in |
Number to convert |
Return Value
String representation of the number
number-toTimestamp
Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.
Argument |
|
val |
Number to convert to a timestamp |
Return Value
Timestamp
number-abs
Calculates the absolute value of a number
Return Value
The absolute value of the number
4.10 - html-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.11 - image-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.12 - int
Chainable Ops
number-notEqual
Determines inequality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are not equal.
number-modulo
Divide a number by another and return remainder
Return Value
Modulo of two numbers
number-mult
Multiply two numbers
Return Value
Product of two numbers
number-powBinary
Raise a number to an exponent
Return Value
The base numbers raised to nth power
number-add
Add two numbers
Return Value
Sum of two numbers
number-sub
Subtract a number from another
Return Value
Difference of two numbers
number-div
Divide a number by another
Return Value
Quotient of two numbers
number-less
Check if a number is less than another
Return Value
Whether the first number is less than the second
number-lessEqual
Check if a number is less than or equal to another
Return Value
Whether the first number is less than or equal to the second
number-equal
Determines equality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are equal.
number-greater
Check if a number is greater than another
Return Value
Whether the first number is greater than the second
number-greaterEqual
Check if a number is greater than or equal to another
Return Value
Whether the first number is greater than or equal to the second
number-negate
Negate a number
Argument |
|
val |
Number to negate |
Return Value
A number
number-toString
Convert a number to a string
Argument |
|
in |
Number to convert |
Return Value
String representation of the number
number-toTimestamp
Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.
Argument |
|
val |
Number to convert to a timestamp |
Return Value
Timestamp
number-abs
Calculates the absolute value of a number
Return Value
The absolute value of the number
List Ops
number-notEqual
Determines inequality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are not equal.
number-modulo
Divide a number by another and return remainder
Return Value
Modulo of two numbers
number-mult
Multiply two numbers
Return Value
Product of two numbers
number-powBinary
Raise a number to an exponent
Return Value
The base numbers raised to nth power
number-add
Add two numbers
Return Value
Sum of two numbers
number-sub
Subtract a number from another
Return Value
Difference of two numbers
number-div
Divide a number by another
Return Value
Quotient of two numbers
number-less
Check if a number is less than another
Return Value
Whether the first number is less than the second
number-lessEqual
Check if a number is less than or equal to another
Return Value
Whether the first number is less than or equal to the second
number-equal
Determines equality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are equal.
number-greater
Check if a number is greater than another
Return Value
Whether the first number is greater than the second
number-greaterEqual
Check if a number is greater than or equal to another
Return Value
Whether the first number is greater than or equal to the second
number-negate
Negate a number
Argument |
|
val |
Number to negate |
Return Value
A number
numbers-argmax
Finds the index of maximum number
Argument |
|
numbers |
list of numbers to find the index of maximum number |
Return Value
Index of maximum number
numbers-argmin
Finds the index of minimum number
Argument |
|
numbers |
list of numbers to find the index of minimum number |
Return Value
Index of minimum number
numbers-avg
Average of numbers
Argument |
|
numbers |
list of numbers to average |
Return Value
Average of numbers
numbers-max
Maximum number
Return Value
Maximum number
numbers-min
Minimum number
Return Value
Minimum number
numbers-stddev
Standard deviation of numbers
Argument |
|
numbers |
list of numbers to calculate the standard deviation |
Return Value
Standard deviation of numbers
numbers-sum
Sum of numbers
Argument |
|
numbers |
list of numbers to sum |
Return Value
Sum of numbers
number-toString
Convert a number to a string
Argument |
|
in |
Number to convert |
Return Value
String representation of the number
number-toTimestamp
Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.
Argument |
|
val |
Number to convert to a timestamp |
Return Value
Timestamp
number-abs
Calculates the absolute value of a number
Return Value
The absolute value of the number
4.13 - joined-table
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
joinedtable-file
Returns the file of a joined-table
Argument |
|
joinedTable |
The joined-table |
Return Value
The file of a joined-table
joinedtable-rows
Returns the rows of a joined-table
Argument |
|
joinedTable |
The joined-table |
leftOuter |
Whether to include rows from the left table that do not have a matching row in the right table |
rightOuter |
Whether to include rows from the right table that do not have a matching row in the left table |
Return Value
The rows of the joined-table
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.14 - molecule-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.15 - number
Chainable Ops
number-notEqual
Determines inequality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are not equal.
number-modulo
Divide a number by another and return remainder
Return Value
Modulo of two numbers
number-mult
Multiply two numbers
Return Value
Product of two numbers
number-powBinary
Raise a number to an exponent
Return Value
The base numbers raised to nth power
number-add
Add two numbers
Return Value
Sum of two numbers
number-sub
Subtract a number from another
Return Value
Difference of two numbers
number-div
Divide a number by another
Return Value
Quotient of two numbers
number-less
Check if a number is less than another
Return Value
Whether the first number is less than the second
number-lessEqual
Check if a number is less than or equal to another
Return Value
Whether the first number is less than or equal to the second
number-equal
Determines equality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are equal.
number-greater
Check if a number is greater than another
Return Value
Whether the first number is greater than the second
number-greaterEqual
Check if a number is greater than or equal to another
Return Value
Whether the first number is greater than or equal to the second
number-negate
Negate a number
Argument |
|
val |
Number to negate |
Return Value
A number
number-toString
Convert a number to a string
Argument |
|
in |
Number to convert |
Return Value
String representation of the number
number-toTimestamp
Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.
Argument |
|
val |
Number to convert to a timestamp |
Return Value
Timestamp
number-abs
Calculates the absolute value of a number
Return Value
The absolute value of the number
List Ops
number-notEqual
Determines inequality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are not equal.
number-modulo
Divide a number by another and return remainder
Return Value
Modulo of two numbers
number-mult
Multiply two numbers
Return Value
Product of two numbers
number-powBinary
Raise a number to an exponent
Return Value
The base numbers raised to nth power
number-add
Add two numbers
Return Value
Sum of two numbers
number-sub
Subtract a number from another
Return Value
Difference of two numbers
number-div
Divide a number by another
Return Value
Quotient of two numbers
number-less
Check if a number is less than another
Return Value
Whether the first number is less than the second
number-lessEqual
Check if a number is less than or equal to another
Return Value
Whether the first number is less than or equal to the second
number-equal
Determines equality of two values.
Argument |
|
lhs |
The first value to compare. |
rhs |
The second value to compare. |
Return Value
Whether the two values are equal.
number-greater
Check if a number is greater than another
Return Value
Whether the first number is greater than the second
number-greaterEqual
Check if a number is greater than or equal to another
Return Value
Whether the first number is greater than or equal to the second
number-negate
Negate a number
Argument |
|
val |
Number to negate |
Return Value
A number
numbers-argmax
Finds the index of maximum number
Argument |
|
numbers |
list of numbers to find the index of maximum number |
Return Value
Index of maximum number
numbers-argmin
Finds the index of the minimum number
| Argument | Description |
| --- | --- |
| numbers | list of numbers to find the index of the minimum number in |
Return Value
Index of the minimum number
numbers-avg
Average of numbers
| Argument | Description |
| --- | --- |
| numbers | list of numbers to average |
Return Value
Average of numbers
numbers-max
Maximum number
Return Value
Maximum number
numbers-min
Minimum number
Return Value
Minimum number
numbers-stddev
Standard deviation of numbers
| Argument | Description |
| --- | --- |
| numbers | list of numbers to calculate the standard deviation of |
Return Value
Standard deviation of numbers
numbers-sum
Sum of numbers
| Argument | Description |
| --- | --- |
| numbers | list of numbers to sum |
Return Value
Sum of numbers
number-toString
Convert a number to a string
| Argument | Description |
| --- | --- |
| in | Number to convert |
Return Value
String representation of the number
number-toTimestamp
Converts a number to a timestamp. Values less than 31536000000 are interpreted as seconds, values less than 31536000000000 as milliseconds, values less than 31536000000000000 as microseconds, and values less than 31536000000000000000 as nanoseconds.
| Argument | Description |
| --- | --- |
| val | Number to convert to a timestamp |
Return Value
Timestamp
number-abs
Calculates the absolute value of a number
Return Value
The absolute value of the number
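The numbers-* ops aggregate a list of numbers, so they pair naturally with an expression that yields one number per run. A hedged sketch, again assuming a `runs` variable and a `val_acc` summary key (both placeholders, not defined by this reference):
```
runs.summary["val_acc"].avg
runs.summary["val_acc"].stddev
runs.summary["val_acc"].argmax
```
Here numbers-avg and numbers-stddev reduce the list to a single number, while numbers-argmax returns the index of the highest-scoring run in the list.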
4.16 - object3D-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.17 - partitioned-table
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
partitionedtable-file
Returns the file of a partitioned-table
| Argument | Description |
| --- | --- |
| partitionedTable | The partitioned-table |
Return Value
The file of the partitioned-table
partitionedtable-rows
Returns the rows of a partitioned-table
| Argument | Description |
| --- | --- |
| partitionedTable | The partitioned-table to get rows from |
Return Value
Rows of the partitioned-table
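As an illustration, both ops chain off a partitioned table logged by a run. The summary key "dataset" below is a placeholder, and the exact chaining syntax may differ in your panel:
```
runs.summary["dataset"].rows
runs.summary["dataset"].file
```
partitionedtable-rows returns the table's row data, while asset-file / partitionedtable-file return the underlying logged file.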
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.18 - project
Chainable Ops
project-artifact
Returns the artifact for a given name within a project
Return Value
The artifact for a given name within a project
project-artifactType
Returns the [artifactType](artifact-type.md) for a given name within a project
| Argument | Description |
| --- | --- |
| project | A project |
| artifactType | The name of the [artifactType](artifact-type.md) |
Return Value
The [artifactType](artifact-type.md) for a given name within a project
project-artifactTypes
Returns the [artifactTypes](artifact-type.md) for a project
Return Value
The [artifactTypes](artifact-type.md) for a project
project-artifactVersion
Returns the [artifactVersion](artifact-version.md) for a given name and version within a project
| Argument | Description |
| --- | --- |
| project | A project |
| artifactName | The name of the [artifactVersion](artifact-version.md) |
| artifactVersionAlias | The version alias of the [artifactVersion](artifact-version.md) |
Return Value
The [artifactVersion](artifact-version.md) for a given name and version within a project
project-createdAt
Returns the creation time of the project
Return Value
The creation time of the project
project-name
Returns the name of the project
Return Value
The name of the project
project-runs
Returns the runs from a project
Return Value
The runs from a project
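For example, with a `project` variable in scope (as in project-scoped panels), these ops can be chained directly off it. A sketch only: the artifact name "my-model", type "model", and alias "latest" are placeholders, and the call syntax for ops that take extra arguments is an assumption here:
```
project.runs
project.createdAt
project.artifactVersion("my-model", "latest")
project.artifactType("model")
```
The first two are written as plain property accesses since the project itself is their only argument; the last two pass their documented arguments explicitly.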
List Ops
project-artifact
Returns the artifact for a given name within a project
Return Value
The artifact for a given name within a project
project-artifactType
Returns the [artifactType](artifact-type.md) for a given name within a project
| Argument | Description |
| --- | --- |
| project | A project |
| artifactType | The name of the [artifactType](artifact-type.md) |
Return Value
The [artifactType](artifact-type.md) for a given name within a project
project-artifactTypes
Returns the [artifactTypes](artifact-type.md) for a project
Return Value
The [artifactTypes](artifact-type.md) for a project
project-artifactVersion
Returns the [artifactVersion](artifact-version.md) for a given name and version within a project
| Argument | Description |
| --- | --- |
| project | A project |
| artifactName | The name of the [artifactVersion](artifact-version.md) |
| artifactVersionAlias | The version alias of the [artifactVersion](artifact-version.md) |
Return Value
The [artifactVersion](artifact-version.md) for a given name and version within a project
project-createdAt
Returns the creation time of the project
Return Value
The creation time of the project
project-name
Returns the name of the project
Return Value
The name of the project
project-runs
Returns the runs from a project
Return Value
The runs from a project
4.19 - pytorch-model-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
4.20 - run
Chainable Ops
run-config
Returns the config typedDict of the run
Return Value
The config typedDict of the run
run-createdAt
Returns the created at datetime of the run
Return Value
The created at datetime of the run
run-heartbeatAt
Returns the last heartbeat datetime of the run
Return Value
The last heartbeat datetime of the run
run-history
Returns the log history of the run
Return Value
The log history of the run
run-jobType
Returns the job type of the run
Return Value
The job type of the run
run-loggedArtifactVersion
Returns the artifactVersion logged by the run for a given name and alias
Return Value
The artifactVersion logged by the run for a given name and alias
run-loggedArtifactVersions
Returns all of the artifactVersions logged by the run
Return Value
The artifactVersions logged by the run
run-name
Returns the name of the run
Return Value
The name of the run
run-runtime
Returns the runtime in seconds of the run
Return Value
The runtime in seconds of the run
run-summary
Returns the summary typedDict of the run
Return Value
The summary typedDict of the run
run-usedArtifactVersions
Returns all of the artifactVersions used by the run
Return Value
The artifactVersions used by the run
run-user
Returns the user of the run
Return Value
The user of the run
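As a sketch, these ops chain off run values; when applied to a list such as `runs`, query expressions typically map the op over each run (an assumption about mapping behavior, and `loss` is a placeholder summary key):
```
runs.name
runs.runtime
runs.summary["loss"]
runs.loggedArtifactVersions
```
run-summary returns a typedDict, so a key lookup such as `["loss"]` is usually chained after it to pull out a single metric.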
List Ops
run-config
Returns the config typedDict of the run
Return Value
The config typedDict of the run
run-createdAt
Returns the created at datetime of the run
Return Value
The created at datetime of the run
run-heartbeatAt
Returns the last heartbeat datetime of the run
Return Value
The last heartbeat datetime of the run
run-history
Returns the log history of the run
Return Value
The log history of the run
run-jobType
Returns the job type of the run
Return Value
The job type of the run
run-loggedArtifactVersion
Returns the artifactVersion logged by the run for a given name and alias
Return Value
The artifactVersion logged by the run for a given name and alias
run-loggedArtifactVersions
Returns all of the artifactVersions logged by the run
Return Value
The artifactVersions logged by the run
run-name
Returns the name of the run
Return Value
The name of the run
run-runtime
Returns the runtime in seconds of the run
Return Value
The runtime in seconds of the run
run-summary
Returns the summary typedDict of the run
Return Value
The summary typedDict of the run
run-usedArtifactVersions
Returns all of the artifactVersions used by the run
Return Value
The artifactVersions used by the run
4.21 - string
Chainable Ops
string-notEqual
Determines inequality of two values.
| Argument | Description |
| --- | --- |
| lhs | The first value to compare. |
| rhs | The second value to compare. |
Return Value
Whether the two values are not equal.
string-add
Concatenates two strings
Return Value
The concatenated string
string-equal
Determines equality of two values.
| Argument | Description |
| --- | --- |
| lhs | The first value to compare. |
| rhs | The second value to compare. |
Return Value
Whether the two values are equal.
string-append
Appends a suffix to a string
| Argument | Description |
| --- | --- |
| str | The string to append to |
| suffix | The suffix to append |
Return Value
The string with the suffix appended
string-contains
Checks if a string contains a substring
| Argument | Description |
| --- | --- |
| str | The string to check |
| sub | The substring to check for |
Return Value
Whether the string contains the substring
string-endsWith
Checks if a string ends with a suffix
| Argument | Description |
| --- | --- |
| str | The string to check |
| suffix | The suffix to check for |
Return Value
Whether the string ends with the suffix
string-findAll
Finds all occurrences of a substring in a string
| Argument | Description |
| --- | --- |
| str | The string to find occurrences of the substring in |
| sub | The substring to find |
Return Value
The list of indices of the substring in the string
string-isAlnum
Checks if a string is alphanumeric
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
Whether the string is alphanumeric
string-isAlpha
Checks if a string is alphabetic
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
Whether the string is alphabetic
string-isNumeric
Checks if a string is numeric
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
Whether the string is numeric
string-lStrip
Strip leading whitespace
| Argument | Description |
| --- | --- |
| str | The string to strip. |
Return Value
The stripped string.
string-len
Returns the length of a string
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
The length of the string
string-lower
Converts a string to lowercase
| Argument | Description |
| --- | --- |
| str | The string to convert to lowercase |
Return Value
The lowercase string
string-partition
Partitions a string into a list of strings
| Argument | Description |
| --- | --- |
| str | The string to split |
| sep | The separator to split on |
Return Value
A list of strings: the string before the separator, the separator, and the string after the separator
string-prepend
Prepends a prefix to a string
| Argument | Description |
| --- | --- |
| str | The string to prepend to |
| prefix | The prefix to prepend |
Return Value
The string with the prefix prepended
string-rStrip
Strip trailing whitespace
| Argument | Description |
| --- | --- |
| str | The string to strip. |
Return Value
The stripped string.
string-replace
Replaces all occurrences of a substring in a string
| Argument | Description |
| --- | --- |
| str | The string to replace contents of |
| sub | The substring to replace |
| newSub | The substring to replace the old substring with |
Return Value
The string with the replacements
string-slice
Slices a string into a substring based on beginning and end indices
| Argument | Description |
| --- | --- |
| str | The string to slice |
| begin | The beginning index of the substring |
| end | The ending index of the substring |
Return Value
The substring
string-split
Splits a string into a list of strings
| Argument | Description |
| --- | --- |
| str | The string to split |
| sep | The separator to split on |
Return Value
The list of strings
string-startsWith
Checks if a string starts with a prefix
| Argument | Description |
| --- | --- |
| str | The string to check |
| prefix | The prefix to check for |
Return Value
Whether the string starts with the prefix
string-strip
Strip whitespace from both ends of a string.
| Argument | Description |
| --- | --- |
| str | The string to strip. |
Return Value
The stripped string.
string-upper
Converts a string to uppercase
| Argument | Description |
| --- | --- |
| str | The string to convert to uppercase |
Return Value
The uppercase string
string-levenshtein
Calculates the Levenshtein distance between two strings.
Return Value
The Levenshtein distance between the two strings.
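For instance, string ops chain off string-typed values such as run names. The sketch below is illustrative: the substrings and the use of `+` for string-add are assumptions, not syntax guaranteed by this reference:
```
runs.name.contains("baseline")
runs.name.startsWith("sweep-")
runs.name.lower
runs.jobType + "/" + runs.name
```
The first two lines return a boolean per run (string-contains, string-startsWith), string-lower is a zero-argument chainable op written as a property access, and the last line concatenates values with string-add.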
List Ops
string-notEqual
Determines inequality of two values.
| Argument | Description |
| --- | --- |
| lhs | The first value to compare. |
| rhs | The second value to compare. |
Return Value
Whether the two values are not equal.
string-add
Concatenates two strings
Return Value
The concatenated string
string-equal
Determines equality of two values.
| Argument | Description |
| --- | --- |
| lhs | The first value to compare. |
| rhs | The second value to compare. |
Return Value
Whether the two values are equal.
string-append
Appends a suffix to a string
| Argument | Description |
| --- | --- |
| str | The string to append to |
| suffix | The suffix to append |
Return Value
The string with the suffix appended
string-contains
Checks if a string contains a substring
| Argument | Description |
| --- | --- |
| str | The string to check |
| sub | The substring to check for |
Return Value
Whether the string contains the substring
string-endsWith
Checks if a string ends with a suffix
| Argument | Description |
| --- | --- |
| str | The string to check |
| suffix | The suffix to check for |
Return Value
Whether the string ends with the suffix
string-findAll
Finds all occurrences of a substring in a string
| Argument | Description |
| --- | --- |
| str | The string to find occurrences of the substring in |
| sub | The substring to find |
Return Value
The list of indices of the substring in the string
string-isAlnum
Checks if a string is alphanumeric
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
Whether the string is alphanumeric
string-isAlpha
Checks if a string is alphabetic
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
Whether the string is alphabetic
string-isNumeric
Checks if a string is numeric
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
Whether the string is numeric
string-lStrip
Strip leading whitespace
| Argument | Description |
| --- | --- |
| str | The string to strip. |
Return Value
The stripped string.
string-len
Returns the length of a string
| Argument | Description |
| --- | --- |
| str | The string to check |
Return Value
The length of the string
string-lower
Converts a string to lowercase
| Argument | Description |
| --- | --- |
| str | The string to convert to lowercase |
Return Value
The lowercase string
string-partition
Partitions a string into a list of strings
| Argument | Description |
| --- | --- |
| str | The string to split |
| sep | The separator to split on |
Return Value
A list of strings: the string before the separator, the separator, and the string after the separator
string-prepend
Prepends a prefix to a string
| Argument | Description |
| --- | --- |
| str | The string to prepend to |
| prefix | The prefix to prepend |
Return Value
The string with the prefix prepended
string-rStrip
Strip trailing whitespace
| Argument | Description |
| --- | --- |
| str | The string to strip. |
Return Value
The stripped string.
string-replace
Replaces all occurrences of a substring in a string
| Argument | Description |
| --- | --- |
| str | The string to replace contents of |
| sub | The substring to replace |
| newSub | The substring to replace the old substring with |
Return Value
The string with the replacements
string-slice
Slices a string into a substring based on beginning and end indices
| Argument | Description |
| --- | --- |
| str | The string to slice |
| begin | The beginning index of the substring |
| end | The ending index of the substring |
Return Value
The substring
string-split
Splits a string into a list of strings
| Argument | Description |
| --- | --- |
| str | The string to split |
| sep | The separator to split on |
Return Value
The list of strings
string-startsWith
Checks if a string starts with a prefix
| Argument | Description |
| --- | --- |
| str | The string to check |
| prefix | The prefix to check for |
Return Value
Whether the string starts with the prefix
string-strip
Strip whitespace from both ends of a string.
| Argument | Description |
| --- | --- |
| str | The string to strip. |
Return Value
The stripped string.
string-upper
Converts a string to uppercase
| Argument | Description |
| --- | --- |
| str | The string to convert to uppercase |
Return Value
The uppercase string
string-levenshtein
Calculates the Levenshtein distance between two strings.
Return Value
The Levenshtein distance between the two strings.
4.22 - table
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
table-rows
Returns the rows of a table
Return Value
The rows of the table
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
table-rows
Returns the rows of a table
Return Value
The rows of the table
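As an example, a table logged to a run's summary can be expanded into row data with table-rows; the "predictions" key below is a placeholder and the chaining syntax is a sketch, not a guaranteed form:
```
runs.summary["predictions"].rows
```
Applying asset-file to the same value returns the logged table's underlying file rather than its rows.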
4.23 - user
Chainable Ops
user-username
Returns the username of the user
Return Value
The username of the user
List Ops
user-username
Returns the username of the user
Return Value
The username of the user
4.24 - video-file
Chainable Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset
List Ops
asset-file
Returns the file of the asset
Return Value
The file of the asset