Sweeps Quickstart
Start from any machine learning model and get a parallel hyperparameter sweep running in minutes.
Want to see a working example? Here's example code and an example dashboard.
Draw insights from large hyperparameter tuning experiments with interactive dashboards.
Trying to quickly generate a sweep based on runs you've already logged? Check out this guide.

1. Set up wandb

Set up your account

  1. Start with a W&B account. Create one now →
  2. Go to your project folder in your terminal and install our library: pip install wandb
  3. Inside your project folder, log in to W&B: wandb login

Set up your Python training script

Trying to run a hyperparameter sweep from a Jupyter notebook? You want these instructions.
  1. Import our library wandb.
  2. Pass the hyperparameter values to wandb.init to populate wandb.config.
  3. Use the values in the config to build your model and execute training.
  4. Log metrics to see them in the live dashboard.
See the code snippets below for several ways to set hyperparameter values so that training scripts can be run stand-alone or as part of a sweep.
Command Line Arguments

train.py
```python
import argparse
import wandb

# Build your ArgumentParser however you like
parser = setup_parser()

# Get the hyperparameters
args = parser.parse_args()

# Pass them to wandb.init
wandb.init(config=args)

# Access all hyperparameter values through wandb.config
config = wandb.config

# Set up your model
model = make_model(config)

# Log metrics inside your training loop
for epoch in range(config["epochs"]):
    val_acc, val_loss = model.fit()
    metrics = {"validation_accuracy": val_acc,
               "validation_loss": val_loss}
    wandb.log(metrics)
```
In-line Dictionary

train.py
```python
import wandb

# Set up your default hyperparameters
hyperparameter_defaults = dict(
    channels=[16, 32],
    batch_size=100,
    learning_rate=0.001,
    optimizer="adam",
    epochs=2,
)

# Pass your defaults to wandb.init
wandb.init(config=hyperparameter_defaults)

# Access all hyperparameter values through wandb.config
config = wandb.config

# Set up your model
model = make_model(config)

# Log metrics inside your training loop
for epoch in range(config["epochs"]):
    val_acc, val_loss = model.fit()
    metrics = {"validation_accuracy": val_acc,
               "validation_loss": val_loss}
    wandb.log(metrics)
```
config.py File

train.py
```python
import wandb

import config  # Python file with default hyperparameters

# Set up your default hyperparameters
hyperparameters = config.config

# Pass them to wandb.init
wandb.init(config=hyperparameters)

# Access all hyperparameter values through wandb.config
config = wandb.config

# Set up your model
model = make_model(config)

# Log metrics inside your training loop
for epoch in range(config["epochs"]):
    val_acc, val_loss = model.fit()
    metrics = {"validation_accuracy": val_acc,
               "validation_loss": val_loss}
    wandb.log(metrics)
```

2. Configure your sweep

Set up a YAML file to specify the hyperparameters you wish to sweep over, along with the structure of the sweep: the training script to call, the search strategy, the stopping criteria, and so on.
Here are some resources on configuring sweeps:
  1. Example YAML files: an example script and several different YAML files
  2. Configuration: full specs to set up your sweep config
  3. Jupyter Notebook: set up your sweep config with a Python dictionary instead of a YAML file
  4. Generate config from UI: take an existing W&B project and generate a config file
  5. Feed in prior runs: take previous runs and add them to a new sweep
Here's an example sweep config file called sweep.yaml:
sweep.yaml
```yaml
program: train.py
method: bayes
metric:
  name: validation_loss
  goal: minimize
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  optimizer:
    values: ["adam", "sgd"]
```
If you specify a metric to optimize, make sure you're logging it. In this example, validation_loss is the metric named in the config file, so the script has to log that exact metric name:
wandb.log({"validation_loss": val_loss})
This example configuration uses a Bayesian search method to choose a set of hyperparameter values, which it passes as command line arguments to the train.py script for each run. The hyperparameters are also accessible via wandb.config after wandb.init is called.
If you're using argparse in your script, we recommend that you use underscores in your variable names instead of hyphens.
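For example, a minimal argparse setup matching the sweep.yaml above might look like the sketch below. The argument names and defaults here are placeholders; the point is that the argument names use underscores and match the parameter names in the sweep config, so the values the sweep passes on the command line end up in wandb.config.

```python
import argparse

import wandb

parser = argparse.ArgumentParser()
# Use underscores so the argument names match the keys in sweep.yaml
parser.add_argument("--learning_rate", type=float, default=0.001)
parser.add_argument("--optimizer", type=str, default="adam")
args = parser.parse_args()

# The values chosen by the sweep arrive as command line arguments
# and are available through wandb.config
wandb.init(config=args)
config = wandb.config
```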

3. Initialize a sweep

After you've set up a sweep config file at sweep.yaml, run this command to get started:
```shell
wandb sweep sweep.yaml
```
This command will print out a sweep ID, which includes the entity name and project name. Copy that to use in the next step!
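If you're working from a notebook instead (see the Jupyter Notebook resource above), a rough equivalent is to express the same configuration as a Python dictionary and initialize the sweep with wandb.sweep, which returns the sweep ID directly. This is a minimal sketch; the project name is a placeholder.

```python
import wandb

# The same configuration as sweep.yaml, expressed as a Python dictionary
sweep_config = {
    "method": "bayes",
    "metric": {"name": "validation_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "optimizer": {"values": ["adam", "sgd"]},
    },
}

# Registers the sweep with W&B and returns the sweep ID that agents will use
sweep_id = wandb.sweep(sweep_config, project="my-sweep-project")  # placeholder project name
```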

4. Launch agent(s)

On each machine or within each process that you'd like to contribute to the sweep, start an agent. Each agent polls the central W&B sweep server you set up with wandb sweep for the next set of hyperparameters to run. Use the same sweep ID for all agents participating in the same sweep.
In a shell on your own machine, run the wandb agent command:
```shell
wandb agent your-sweep-id
```
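You can also start an agent from Python rather than the command line by passing a training function to wandb.agent. A rough sketch, assuming a train function that follows the same placeholder pattern (make_model, model.fit) as the snippets in step 1:

```python
import wandb

def train():
    # wandb.init() inside the function picks up the hyperparameters chosen by the sweep
    with wandb.init() as run:
        config = run.config
        model = make_model(config)  # same placeholder helper as in the train.py snippets
        for epoch in range(config["epochs"]):
            val_acc, val_loss = model.fit()
            run.log({"validation_accuracy": val_acc,
                     "validation_loss": val_loss})

# Run up to 10 trials of the sweep in this process; count is optional
wandb.agent("your-sweep-id", function=train, count=10)
```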

5. Visualize results

Open your project to see your live results in the sweep dashboard. With just a few clicks, construct rich, interactive charts like parallel coordinates plots, parameter importance analyses, and more.

6. Stop the agent

From the terminal, hit Ctrl+C to stop the run that the sweep agent is currently executing. To kill the agent itself, hit Ctrl+C again after the run has stopped.