Hyperparameter Tuning
Hyperparameter search and model optimization with W&B Sweeps
Use Weights & Biases Sweeps to automate hyperparameter optimization and explore the space of possible models.
Get started with Sweeps quickly with our video tutorial and Colab notebook.

Benefits of using W&B Sweeps

  1. Quick setup: Get going with just a few lines of code. Launching a sweep across dozens of machines is just as easy as starting one on your laptop.
  2. Transparent: We cite all the algorithms we use, and our code is open source.
  3. Powerful: Our sweeps are completely customizable and configurable.

Common Use Cases

  1. Explore: Efficiently sample the space of hyperparameter combinations to discover promising regions and build intuition about your model.
  2. Optimize: Use sweeps to find the set of hyperparameters with optimal performance.
  3. k-fold cross validation: Sweeps can also drive k-fold cross validation; a brief sketch of the pattern follows this list.
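
The full k-fold example isn't reproduced here, but the core pattern is small enough to sketch: each sweep run evaluates one hyperparameter setting on every fold and logs the mean validation score, which the sweep then optimizes. This is a minimal sketch, not the official example; the random dataset, the Ridge model, the swept alpha parameter, and the metric name mean_val_r2 are all placeholder assumptions.

```python
# Sketch: each sweep run trains one hyperparameter setting on k folds
# and logs the mean validation score for the sweep to optimize.
import numpy as np
import wandb
from sklearn.linear_model import Ridge          # placeholder model
from sklearn.model_selection import KFold

X = np.random.rand(200, 10)                     # placeholder dataset
y = np.random.rand(200)

def train():
    run = wandb.init()                          # config is supplied by the sweep
    scores = []
    for fold, (tr, va) in enumerate(KFold(n_splits=5).split(X)):
        model = Ridge(alpha=run.config.alpha)   # the swept hyperparameter
        model.fit(X[tr], y[tr])
        score = model.score(X[va], y[va])       # R^2 on the held-out fold
        wandb.log({"fold": fold, "val_r2": score})
        scores.append(score)
    # Optimize the cross-validated mean rather than any single fold.
    wandb.log({"mean_val_r2": float(np.mean(scores))})

sweep_config = {
    "method": "random",
    "metric": {"name": "mean_val_r2", "goal": "maximize"},
    "parameters": {"alpha": {"min": 0.01, "max": 10.0}},
}
sweep_id = wandb.sweep(sweep_config, project="kfold-sweep-demo")
wandb.agent(sweep_id, function=train, count=10)
```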

Approach

  1. Add wandb: In your Python script, add a couple of lines of code to log hyperparameters and output metrics. Get started now →
  2. Write config: Define the variables and ranges to sweep over, and pick a search strategy: we support grid, random, and Bayesian search, plus techniques for faster iteration such as early stopping. Check out some example configs here, or see the sketch after this list.
  3. Initialize sweep: Launch the sweep server. We host this central controller, which coordinates the agents that execute the sweep.
  4. Launch agent(s): Run a single-line command on each machine you'd like to use to train models in the sweep. Each agent asks the central sweep server which hyperparameters to try next, then executes the runs.
  5. Visualize results: Open our live dashboard to see all your results in one central place.
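
Putting the five steps together, here is a minimal end-to-end sketch in Python. It is illustrative only: the project name, metric name, parameter ranges, and the fake val_loss computation are placeholder assumptions, not required names.

```python
import wandb

# Step 2 - write config: variables, ranges, and a search strategy.
sweep_config = {
    "method": "bayes",  # also: "grid" or "random"
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [16, 32, 64]},
        "epochs": {"value": 5},
    },
    # Optional early stopping for faster iteration.
    "early_terminate": {"type": "hyperband", "min_iter": 3},
}

def train():
    # Step 1 - add wandb: init pulls this run's hyperparameters from the sweep.
    run = wandb.init()
    for epoch in range(run.config.epochs):
        # Placeholder for a real training step that uses
        # run.config.learning_rate and run.config.batch_size.
        val_loss = run.config.learning_rate * (1.0 / (epoch + 1))
        wandb.log({"epoch": epoch, "val_loss": val_loss})

# Step 3 - initialize sweep: register it with the hosted controller.
sweep_id = wandb.sweep(sweep_config, project="sweeps-demo")

# Step 4 - launch agent(s): run one in-process here; on other machines,
# the single-line command is `wandb agent <entity>/<project>/<sweep_id>`.
wandb.agent(sweep_id, function=train, count=20)
```

Step 5 needs no code: the sweep's dashboard in the W&B UI updates live as agents report runs.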