Define sweep configuration
A W&B Sweep combines a strategy for exploring hyperparameter values with the code that evaluates them. The strategy can be as simple as trying every option or as complex as Bayesian Optimization and Hyperband (BOHB).
Define your strategy in the form of a sweep configuration. Specify the configuration either in a:
- Python nested dictionary data structure if you use a Jupyter Notebook or Python script.
- YAML file if you use the command line (CLI).
The following code snippets show how to define a sweep configuration in a Jupyter Notebook or Python script, and in a YAML file. Configuration keys are defined in detail in subsequent sections.
- Python script or Jupyter Notebook
- YAML
sweep_configuration = {
    "method": "bayes",
    "name": "sweep",
    "metric": {"goal": "minimize", "name": "validation_loss"},
    "parameters": {
        "batch_size": {"values": [16, 32, 64]},
        "epochs": {"values": [5, 10, 15]},
        "lr": {"max": 0.1, "min": 0.0001},
    },
}
program: train.py
method: bayes
metric:
  name: validation_loss
  goal: minimize
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  optimizer:
    values: ["adam", "sgd"]
- Ensure that you log (`wandb.log`) the exact metric name that you defined the sweep to optimize within your Python script or Jupyter Notebook.
- You cannot change the sweep configuration once you start the W&B Sweep agent.
For example, suppose you want W&B Sweeps to maximize the validation accuracy during training. Within your Python script, you store the validation accuracy in a variable named `val_loss`. In your YAML configuration file, you define this as:
metric:
  goal: maximize
  name: val_loss
You must log the variable `val_loss` (in this example) to W&B within your Python script or Jupyter Notebook.
wandb.log({"val_loss": validation_loss})
Defining the metric in the sweep configuration is only required when using the `bayes` method for the sweep.
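For instance, the following minimal end-to-end sketch ties the pieces together; the project name and the loss computation are placeholders, and the point is that the `name` under `metric` must match the key passed to `wandb.log` exactly:

```python
import wandb

# Minimal sketch: metric.name in the config must match the key logged
# with wandb.log. Project name and loss computation are placeholders.
sweep_configuration = {
    "method": "bayes",
    "metric": {"goal": "minimize", "name": "val_loss"},
    "parameters": {"lr": {"min": 0.0001, "max": 0.1}},
}

def train():
    wandb.init()
    val_loss = (wandb.config.lr - 0.01) ** 2  # stand-in for real training
    wandb.log({"val_loss": val_loss})  # key matches metric.name above

sweep_id = wandb.sweep(sweep_configuration, project="my-project")
wandb.agent(sweep_id, function=train, count=10)
```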
Sweep configuration structure
Sweep configurations are nested: keys can have further keys as their values. The top-level keys are listed and briefly described below, then detailed in the following section.
Top-Level Key | Description |
---|---|
program | (required) Training script to run. |
method | (required) Specify the search strategy. |
parameters | (required) Specify parameter bounds to search. |
name | The name of the sweep, displayed in the W&B UI. |
description | Text description of the sweep. |
metric | Specify the metric to optimize (only used by certain search strategies and stopping criteria). |
early_terminate | Specify any early stopping criteria. |
command | Specify command structure for invoking and passing arguments to the training script. |
project | Specify the project for this sweep. |
entity | Specify the entity for this sweep. |
run_cap | Specify a maximum number of runs in a sweep. |
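Putting several of these keys together, a configuration might look like the following sketch; the entity, project, and sweep name values are placeholders:

```python
# Sketch combining the top-level keys above; entity, project, and
# name values are placeholders.
sweep_configuration = {
    "program": "train.py",
    "name": "my-first-sweep",
    "description": "Bayesian search over the learning rate",
    "method": "bayes",
    "metric": {"goal": "minimize", "name": "validation_loss"},
    "parameters": {"lr": {"min": 0.0001, "max": 0.1}},
    "early_terminate": {"type": "hyperband", "min_iter": 3},
    "run_cap": 50,
    "project": "my-project",
    "entity": "my-team",
}
```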
Search type methods
The following list describes hyperparameter search methods. Specify the search strategy with the `method` key:
- `grid` – Iterate over every combination of hyperparameter values. Can be computationally costly.
- `random` – Choose a random set of hyperparameter values on each iteration based on provided distributions.
- `bayes` – Create a probabilistic model of the metric score as a function of the hyperparameters, and choose hyperparameter values with a high probability of improving the metric. Bayesian search uses a Gaussian process to model the relationship between the parameters and the model metric, and requires the `metric` key to be specified. It works well for small numbers of continuous parameters but scales poorly.
- Random search
- Grid search
- Bayes search
method: random
method: grid
method: bayes
metric:
  name: val_loss
  goal: minimize
Random and Bayesian searches run forever, until you stop the process from the command line, from within your Python script, or from the W&B App UI. Grid search also runs forever if it searches within a continuous search space.
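To bound an otherwise open-ended search, you can set the `run_cap` key (described below) or pass a `count` to the agent. A sketch, with placeholder names:

```python
import wandb

# Sketch: cap an open-ended random search. The project name and the
# train() body are placeholders.
sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "val_loss"},
    "parameters": {"lr": {"min": 0.0001, "max": 0.1}},
    "run_cap": 20,  # the sweep finishes after 20 runs in total
}

def train():
    wandb.init()
    wandb.log({"val_loss": (wandb.config.lr - 0.01) ** 2})  # stand-in metric

sweep_id = wandb.sweep(sweep_configuration, project="my-project")
wandb.agent(sweep_id, function=train, count=5)  # this agent runs at most 5
```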
Configuration keys
method
Specify the search strategy with the `method` key in the sweep configuration.
method | Description |
---|---|
grid | Grid search iterates over all possible combinations of parameter values. |
random | Random search chooses a random set of values on each iteration. |
bayes | Our Bayesian hyperparameter search method uses a Gaussian Process to model the relationship between the parameters and the model metric and chooses parameters to optimize the probability of improvement. This strategy requires the metric key to be specified. |
parameters
Describe the hyperparameters to explore during the sweep. For each hyperparameter, specify the name and the possible values as a list of constants (for any `method`), or specify a `distribution` for `random` or `bayes`.
Key | Description |
---|---|
values | Specifies all valid values for this hyperparameter. Compatible with grid . |
value | Specifies the single valid value for this hyperparameter. Compatible with grid . |
distribution | (str) Selects a distribution from the distribution table below. If not specified, defaults to categorical if values is set, to int_uniform if max and min are set to integers, to uniform if max and min are set to floats, or to constant if value is set. |
probabilities | Specify the probability of selecting each element of values when using random . |
min, max | (int or float) Minimum and maximum values. If int, for int_uniform-distributed hyperparameters. If float, for uniform-distributed hyperparameters. |
mu | (float ) Mean parameter for normal - or lognormal -distributed hyperparameters. |
sigma | (float ) Standard deviation parameter for normal - or lognormal -distributed hyperparameters. |
q | (float ) Quantization step size for quantized hyperparameters. |
parameters | Nest other parameters inside a root level parameter. |
Examples
- single value
- multiple values
- probabilities
- distribution
- nested
parameter_name:
  value: 1.618

parameter_name:
  values:
    - 8
    - 6
    - 7
    - 5
    - 3
    - 0
    - 9

parameter_name:
  values: [1, 2, 3, 4, 5]
  probabilities: [0.1, 0.2, 0.1, 0.25, 0.35]

parameter_name:
  distribution: normal
  mu: 100
  sigma: 10

optimizer:
  parameters:
    learning_rate:
      values: [0.01, 0.001]
    momentum:
      value: 0.9
distribution
Specify how to distribute values if you choose a `random` or `bayes` search method.
Value | Description |
---|---|
constant | Constant distribution. Must specify value . |
categorical | Categorical distribution. Must specify values . |
int_uniform | Discrete uniform distribution on integers. Must specify max and min as integers. |
uniform | Continuous uniform distribution. Must specify max and min as floats. |
q_uniform | Quantized uniform distribution. Returns round(X / q) * q where X is uniform. q defaults to 1 . |
log_uniform | Log-uniform distribution. Returns a value X between exp(min) and exp(max) such that the natural logarithm is uniformly distributed between min and max . |
log_uniform_values | Log-uniform distribution. Returns a value X between min and max such that log(X) is uniformly distributed between log(min) and log(max). |
q_log_uniform | Quantized log uniform. Returns round(X / q) * q where X is log_uniform . q defaults to 1 . |
q_log_uniform_values | Quantized log uniform. Returns round(X / q) * q where X is log_uniform_values . q defaults to 1 . |
inv_log_uniform | Inverse log uniform distribution. Returns X , where log(1/X) is uniformly distributed between min and max . |
inv_log_uniform_values | Inverse log uniform distribution. Returns X , where log(1/X) is uniformly distributed between log(1/max) and log(1/min) . |
normal | Normal distribution. Return value is normally-distributed with mean mu (default 0 ) and standard deviation sigma (default 1 ). |
q_normal | Quantized normal distribution. Returns round(X / q) * q where X is normal. q defaults to 1. |
log_normal | Log normal distribution. Returns a value X such that the natural logarithm log(X) is normally distributed with mean mu (default 0 ) and standard deviation sigma (default 1 ). |
q_log_normal | Quantized log normal distribution. Returns round(X / q) * q where X is log_normal . q defaults to 1 . |
Examples
- constant
- categorical
- uniform
- q_uniform
parameter_name:
  distribution: constant
  value: 2.71828

parameter_name:
  distribution: categorical
  values:
    - elu
    - relu
    - gelu
    - selu
    - prelu
    - lrelu
    - rrelu
    - relu6

parameter_name:
  distribution: uniform
  min: 0
  max: 1

parameter_name:
  distribution: q_uniform
  min: 0
  max: 256
  q: 1
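The distinction between `log_uniform` and `log_uniform_values` is easy to miss: the former takes natural-log exponents as bounds, while the latter takes the bounds themselves. A sketch of two equivalent learning-rate specifications (the parameter values are illustrative):

```python
import math

# Two equivalent log-scale learning-rate specifications.
# log_uniform_values takes the actual bounds:
lr_by_values = {"distribution": "log_uniform_values", "min": 1e-4, "max": 1e-1}

# log_uniform takes natural-log exponents, so the sampled value lies
# between exp(min) and exp(max):
lr_by_exponents = {
    "distribution": "log_uniform",
    "min": math.log(1e-4),
    "max": math.log(1e-1),
}
```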
metric
Describes the metric to optimize. This metric should be logged explicitly to W&B by your training script.
Key | Description |
---|---|
name | Name of the metric to optimize. |
goal | Either minimize or maximize (Default is minimize ). |
target | Goal value for the metric you're optimizing. When any run in the sweep achieves that target value, the sweep's state will be set to finished . This means all agents with active runs will finish those jobs, but no new runs will be launched in the sweep. |
For example, if you want to minimize the validation loss of your model:
# model training code that returns validation loss as valid_loss
wandb.log({"val_loss": valid_loss})
Examples
- Maximize
- Minimize
- Target
metric:
  name: val_acc
  goal: maximize

metric:
  name: val_loss
  goal: minimize

metric:
  name: val_acc
  goal: maximize
  target: 0.95
The metric you optimize must be a top-level metric.
Do not log the metric for your sweep inside of a sub-directory. In the following code example, we want to log the validation loss (`"loss": val_loss`). First we define it in a dictionary. However, the dictionary passed to `wandb.log` does not specify the key-value pair to track.
val_metrics = {
    "loss": val_loss,
    "acc": val_acc,
}

# Incorrect. The key-value pair to track is nested inside the dictionary.
wandb.log({"val_loss": val_metrics})
Instead, log the metric at the top level. For example, after you create a dictionary, specify the key-value pair when you pass the dictionary to the `wandb.log` method:
val_metrics = {
    "loss": val_loss,
    "acc": val_acc,
}

wandb.log({"val_loss": val_metrics["loss"]})
early_terminate
Early termination is an optional feature that speeds up hyperparameter search by stopping poorly performing runs. When early stopping is triggered, the agent stops the current run and gets the next set of hyperparameters to try.
Key | Description |
---|---|
type | Specify the stopping algorithm |
We support the following stopping algorithm(s):
type | Description |
---|---|
hyperband | Use the hyperband method. |
hyperband
Hyperband stopping evaluates whether a program should be stopped or permitted to continue at one or more pre-set iteration counts, called "brackets". When a run reaches a bracket, its metric value is compared to all previously reported metric values, and the run is terminated if its value is too high (when the goal is minimization) or too low (when the goal is maximization).
Brackets are based on the number of logged iterations. The number of brackets corresponds to the number of times you log the metric you are optimizing. The iterations can correspond to steps, epochs, or something in between. The numerical value of the step counter is not used in bracket calculations.
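For example, if you log the optimized metric once per epoch, each epoch counts as one hyperband iteration regardless of the step value. A sketch:

```python
import wandb

wandb.init()
for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)  # stand-in for a real validation loss
    # Each call that logs the optimized metric advances the hyperband
    # iteration count by one; the step number itself is not used.
    wandb.log({"val_loss": val_loss})
```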
Specify either `min_iter` or `max_iter` to create a bracket schedule.
Key | Description |
---|---|
min_iter | Specify the iteration for the first bracket |
max_iter | Specify the maximum number of iterations. |
s | Specify the total number of brackets (required for max_iter ) |
eta | Specify the bracket multiplier schedule (default: 3 ). |
strict | Enable 'strict' mode that prunes runs aggressively, more closely following the original Hyperband paper. Defaults to false. |
The hyperband early terminator checks which W&B runs to terminate once every few minutes. The end run timestamp might differ from the specified brackets if your runs or iterations are short.
Examples
- Hyperband (min_iter)
- Hyperband (max_iter)
early_terminate:
  type: hyperband
  min_iter: 3
The brackets for this example are [3, 3*eta, 3*eta*eta, 3*eta*eta*eta], which equals [3, 9, 27, 81].
early_terminate:
  type: hyperband
  max_iter: 27
  s: 2
The brackets for this example are [27/eta, 27/eta/eta], which equals [9, 3].
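To make the schedules concrete, the following sketch reproduces the bracket arithmetic of both examples, with the default eta of 3 (the number of brackets shown follows the examples above):

```python
eta = 3

# min_iter: 3 -> [3, 3*eta, 3*eta*eta, 3*eta*eta*eta]
min_iter = 3
print([min_iter * eta**i for i in range(4)])  # [3, 9, 27, 81]

# max_iter: 27, s: 2 -> [27/eta, 27/eta/eta]
max_iter, s = 27, 2
print([max_iter // eta**i for i in range(1, s + 1)])  # [9, 3]
```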
command
A sweep agent invokes your training script with a command in the following default format:
- UNIX
- Windows
/usr/bin/env python train.py --param1=value1 --param2=value2
python train.py --param1=value1 --param2=value2
On UNIX systems, `/usr/bin/env` ensures that the right Python interpreter is chosen based on the environment.
The format and contents can be modified by specifying values under the command
key. Fixed components of the command, such as filenames, can be included directly (see examples below).
We support the following macros for variable components of the command:
Command Macro | Description |
---|---|
${env} | /usr/bin/env on UNIX systems, omitted on Windows. |
${interpreter} | Expands to python . |
${program} | Training script filename specified by the sweep configuration program key. |
${args} | Hyperparameters and their values in the form --param1=value1 --param2=value2 . |
${args_no_boolean_flags} | Hyperparameters and their values in the form --param1=value1 except boolean parameters are in the form --boolean_flag_param when True and omitted when False . |
${args_no_hyphens} | Hyperparameters and their values in the form param1=value1 param2=value2 . |
${args_json} | Hyperparameters and their values encoded as JSON. |
${args_json_file} | The path to a file containing the hyperparameters and their values encoded as JSON. |
${envvar} | A way to pass environment variables. ${envvar:MYENVVAR} expands to the value of MYENVVAR environment variable. |
The default command format is defined as:
command:
  - ${env}
  - ${interpreter}
  - ${program}
  - ${args}
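On the receiving side, a training script invoked with `${args}` sees ordinary command-line flags. A sketch of how such a script might consume them (the parameter names are illustrative):

```python
import argparse

import wandb

# Sketch: parse the --param=value flags produced by ${args}
# (parameter names here are illustrative).
parser = argparse.ArgumentParser()
parser.add_argument("--learning_rate", type=float)
parser.add_argument("--optimizer", type=str)
args = parser.parse_args()

wandb.init(config=args)  # mirror the parsed flags into wandb.config
```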
Examples
- Set python interpreter
- Add extra parameters
- Omit arguments
- Hydra
Remove the `${interpreter}` macro and provide a value explicitly to hardcode the Python interpreter. For example, the following code snippet demonstrates how to do this:
command:
  - ${env}
  - python3
  - ${program}
  - ${args}
To add extra command line arguments not specified by sweep configuration parameters:
command:
  - ${env}
  - ${interpreter}
  - ${program}
  - "--config"
  - "your-training-config.json"
  - ${args}
If your program does not use argument parsing, you can avoid passing arguments altogether and take advantage of `wandb.init` picking up sweep parameters into `wandb.config` automatically:
command:
  - ${env}
  - ${interpreter}
  - ${program}
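In that case, the script reads its hyperparameters from `wandb.config` directly. A sketch (the `learning_rate` parameter is illustrative):

```python
import wandb

# Sketch: with ${args} omitted, the agent still populates wandb.config,
# so the script reads its hyperparameters from there.
run = wandb.init()
learning_rate = wandb.config.learning_rate  # illustrative sweep parameter
```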
You can change the command to pass arguments the way tools like Hydra expect. See Hydra with W&B for more information.
command:
  - ${env}
  - ${interpreter}
  - ${program}
  - ${args_no_hyphens}