Hyperparameter search and model optimization with W&B Sweeps
Use Weights & Biases Sweeps to automate hyperparameter search and explore the space of possible models. Create a sweep with a few lines of code. Sweeps combines the benefits of automated hyperparameter search with our visualization-rich, interactive experiment tracking. Pick from popular search methods such as Bayesian, grid, and random search to explore the hyperparameter space. Scale and parallelize sweep jobs across one or more machines.
Draw insights from large hyperparameter tuning experiments with interactive dashboards.
There are two components to Weights & Biases Sweeps: a controller and one or more agents. The controller picks new hyperparameter combinations to try. It is typically hosted on the Weights & Biases server.
Agents query the Weights & Biases server for hyperparameters and use them to run model training. They then report the training results back to the controller. Agents can run one or more processes on one or more machines. This flexibility to run multiple processes across multiple machines makes it easy to parallelize and scale sweeps. For more information on how to scale sweeps, see Parallelize agents.
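As a sketch of how parallelization works in practice, the same agent command can be launched on every machine you want to contribute runs; the entity, project, and sweep ID below are placeholders:

```shell
# On each machine, start an agent that pulls hyperparameter sets
# from the sweep controller (entity/project/sweep ID are placeholders):
wandb agent my-entity/my-project/abc123

# Optionally cap how many runs this particular agent executes:
wandb agent --count 5 my-entity/my-project/abc123
```

Each agent independently asks the controller for the next set of hyperparameters, so adding machines scales the sweep without any coordination on your side.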
Create a W&B Sweep with the following steps:
- 1. Add W&B to your code: In your Python script, add a few lines of code to log hyperparameters and output metrics from your training run. See Add W&B to your code for more information.
- 2. Define the sweep configuration: Define the variables and ranges to sweep over. Pick a search strategy; we support grid, random, and Bayesian search, plus techniques for faster iterations like early stopping. See Define sweep configuration for more information.
- 3. Initialize the sweep: Start the sweep controller, which picks the hyperparameter combinations and hands them out to agents. See Initialize sweeps for more information.
- 4. Start the sweep: Run a single-line command on each machine you'd like to use to train models in the sweep. The agents ask the central sweep controller what hyperparameters to try next, and then they execute the runs. See Start sweep agents for more information.
- 5. Visualize results (optional): Open our live dashboard to see all your results in one central place.
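The steps above can be sketched in a single Python script using the `wandb` library. This is a minimal illustration, not a complete training script: the project name, metric names, and the `train` body are placeholders you would replace with your own.

```python
# Minimal sketch of a W&B sweep (assumes `pip install wandb` and a
# logged-in W&B account; project and metric names are illustrative).

# Define the sweep configuration as a plain dictionary: the search
# method, the metric to optimize, and the parameter ranges to explore.
sweep_config = {
    "method": "bayes",  # or "grid" / "random"
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [16, 32, 64]},
    },
}


def run_sweep():
    import wandb  # imported lazily so the config above is reusable on its own

    # Initialize the sweep; the controller is hosted on the W&B server.
    sweep_id = wandb.sweep(sweep_config, project="my-sweep-demo")

    def train():
        # wandb.init() pulls the hyperparameters chosen by the controller.
        with wandb.init() as run:
            config = run.config
            # ... train your model using config.learning_rate and
            # config.batch_size, then report the metric being optimized ...
            run.log({"val_loss": 0.0})  # placeholder value

    # Start an agent that asks the controller for runs to execute.
    wandb.agent(sweep_id, function=train, count=10)
```

Keeping the configuration as a plain dictionary means the same config can also be saved as YAML and used with the `wandb sweep` CLI instead of the Python API.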
Depending on your use case, explore the following resources to get started with Weights & Biases Sweeps: