Use Weights & Biases Sweeps to automate hyperparameter search and explore the space of possible models. Create a sweep with a few lines of code. Sweeps combines the benefits of automated hyperparameter search with our visualization-rich, interactive experiment tracking. Pick from popular search methods such as Bayesian, grid, and random search to explore the hyperparameter space. Scale and parallelize Sweep jobs across one or more machines.
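To see how the search methods differ, here is a small plain-Python sketch (the hyperparameter names and values are illustrative only): grid search enumerates every combination, while random search draws independent samples from the space.

```python
import itertools
import random

# Hypothetical hyperparameter space (names and values are illustrative).
space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [32, 64],
}

# Grid search enumerates every combination: 3 x 2 = 6 candidates.
grid = [dict(zip(space, values)) for values in itertools.product(*space.values())]

# Random search draws independent samples from the space, here 4 of them.
random.seed(0)
randomized = [{k: random.choice(v) for k, v in space.items()} for _ in range(4)]

print(len(grid))        # 6
print(randomized[0])
```

Grid search is exhaustive but grows multiplicatively with each added hyperparameter; random search caps the number of runs regardless of how large the space is.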
How it works
A Weights & Biases Sweep has two components: a controller and one or more agents. The controller picks the next hyperparameter combinations to try; it typically runs on the Weights & Biases server.
Agents query the Weights & Biases server for hyperparameters, use them to run model training, and report the results back to the sweep server. Because agents can run one or more processes across one or more machines, it is easy to parallelize and scale Sweeps. For more information on how to scale sweeps, see Parallelize agents.
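The controller–agent protocol can be sketched in plain Python. This is a toy, in-process stand-in for illustration only, not the real W&B server or SDK; all class and function names here are hypothetical.

```python
import random

class Controller:
    """Toy stand-in for the sweep controller: hands out random configs."""
    def __init__(self, space):
        self.space = space
        self.results = []  # (config, metric) pairs reported by agents

    def suggest(self):
        # A real controller could also use grid or Bayesian strategies.
        return {k: random.choice(v) for k, v in self.space.items()}

    def report(self, config, metric):
        self.results.append((config, metric))

def agent(controller, n_runs):
    """Each agent repeatedly asks for hyperparameters, trains, reports back."""
    for _ in range(n_runs):
        config = controller.suggest()
        # Stand-in "training": a made-up loss that depends on the config.
        loss = config["learning_rate"] * config["batch_size"]
        controller.report(config, loss)

random.seed(42)
ctrl = Controller({"learning_rate": [0.001, 0.01], "batch_size": [32, 64]})
agent(ctrl, n_runs=3)   # in a real sweep, many agents run in parallel
best = min(ctrl.results, key=lambda r: r[1])
print(best)
```

In a real Sweep the controller lives on the W&B server and several agents, each possibly on a different machine, call in concurrently; the loop structure is the same.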
Create a W&B Sweep with the following steps:
- Add W&B to your code: In your Python script, add a couple lines of code to log hyperparameters and output metrics from your script. See Add W&B to your code for more information.
- Define the sweep configuration: Define the variables and ranges to sweep over. Pick a search strategy: we support grid, random, and Bayesian search, plus techniques for faster iteration such as early stopping. See Define sweep configuration for more information.
- Initialize sweep: Start the Sweep server. We host this central controller, which coordinates the agents that execute the sweep. See Initialize sweeps for more information.
- Start sweep: Run a single-line command on each machine you'd like to use to train models in the sweep. The agents ask the central sweep server what hyperparameters to try next, and then they execute the runs. See Start sweep agents for more information.
- Visualize results (optional): Open our live dashboard to see all your results in one central place.
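The steps above can be sketched with the W&B Python SDK. This assumes `wandb` is installed (`pip install wandb`) and you are logged in (`wandb login`); the project name, metric name, and hyperparameter values are illustrative. The calls that contact the W&B server are shown commented out so the sketch stands alone.

```python
# Step 2: sweep configuration as a plain dict (YAML files work too).
sweep_config = {
    "method": "random",                       # or "grid" / "bayes"
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [32, 64, 128]},
    },
}

# Step 1: a training function that logs hyperparameters and metrics.
def train():
    import wandb  # imported here so the rest of the sketch runs without wandb installed
    with wandb.init() as run:
        lr = run.config.learning_rate         # agent-supplied hyperparameters
        bs = run.config.batch_size
        loss = lr * bs                        # stand-in for real training
        run.log({"loss": loss})

# Steps 3 and 4: initialize the sweep, then start an agent on each machine.
# (Commented out so the sketch runs without a W&B account; "sweep-demo" is
# a hypothetical project name.)
#
#   import wandb
#   sweep_id = wandb.sweep(sweep_config, project="sweep-demo")
#   wandb.agent(sweep_id, function=train, count=5)
```

Running `wandb.agent` with the same `sweep_id` on additional machines adds more agents to the same sweep, which is how step 4 parallelizes the search.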
How to get started
Depending on your use case, explore the following resources to get started with Weights & Biases Sweeps:
- If this is your first time hyperparameter tuning with Weights & Biases Sweeps, we recommend the Quickstart, which walks you through setting up your first W&B Sweep.
- Explore topics about Sweeps in the Weights & Biases Developer Guide.
- Try our Organizing Hyperparameter Sweeps in PyTorch Google Colab Jupyter notebook for an example of how to create sweeps using the PyTorch framework in a Jupyter notebook.
- Explore a curated list of Sweep experiments that explore hyperparameter optimization with W&B Sweeps. Results are stored in W&B Reports.
- Read the Weights & Biases SDK Reference Guide.
For a step-by-step video, see: Tune Hyperparameters Easily with W&B Sweeps.