LightGBM

The wandb library includes a special callback for LightGBM. It's also easy to use the generic logging features of Weights & Biases to track large experiments, like hyperparameter sweeps.

from wandb.integration.lightgbm import wandb_callback, log_summary
import lightgbm as lgb

# Log metrics to W&B
gbm = lgb.train(..., callbacks=[wandb_callback()])

# Log feature importance plot and upload model checkpoint to W&B
log_summary(gbm, save_model_checkpoint=True)
info: Looking for working code examples? Check out our repository of examples on GitHub.
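
If you'd like to see how those two calls fit into a full training script, here is a minimal end-to-end sketch. The dataset, project name, and hyperparameters below are illustrative assumptions, not part of the official snippet above.

# A minimal end-to-end sketch, assuming the scikit-learn breast cancer
# dataset; the project name and hyperparameters are illustrative.
import lightgbm as lgb
import wandb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from wandb.integration.lightgbm import wandb_callback, log_summary

# The callback logs into the active run, so start one first.
wandb.init(project="lightgbm-demo")

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

train_data = lgb.Dataset(X_train, label=y_train)
val_data = lgb.Dataset(X_val, label=y_val, reference=train_data)

params = {"objective": "binary", "metric": "auc", "learning_rate": 0.05}

# Per-round validation metrics are streamed to W&B by the callback.
gbm = lgb.train(
    params,
    train_data,
    num_boost_round=100,
    valid_sets=[val_data],
    callbacks=[wandb_callback()],
)

# Log the feature importance plot and upload the model checkpoint.
log_summary(gbm, save_model_checkpoint=True)

wandb.finish()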

Tuning your hyperparameters with Sweeps

Squeezing the maximum performance out of your models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes Sweeps, a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter-testing experiments.
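
As a sketch of what a sweep over LightGBM hyperparameters might look like, the snippet below defines a small search space and runs ten trials with wandb.agent. The search space, metric name, and project name are assumptions for illustration, not taken from this page.

# A minimal Sweeps sketch; the search space, metric, and project name
# are illustrative assumptions.
import lightgbm as lgb
import wandb
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

sweep_config = {
    "method": "bayes",  # "grid" and "random" are also supported
    "metric": {"name": "val_auc", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 0.001, "max": 0.2},
        "max_depth": {"values": [3, 5, 7, -1]},
    },
}

def train():
    # Inside a sweep, wandb.init() receives the sampled hyperparameters.
    run = wandb.init()
    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    params = {
        "objective": "binary",
        "learning_rate": run.config.learning_rate,
        "max_depth": run.config.max_depth,
    }
    gbm = lgb.train(params, lgb.Dataset(X_tr, label=y_tr), num_boost_round=50)
    # Report the metric the sweep is optimizing.
    wandb.log({"val_auc": roc_auc_score(y_val, gbm.predict(X_val))})

sweep_id = wandb.sweep(sweep_config, project="lightgbm-sweeps-demo")
wandb.agent(sweep_id, function=train, count=10)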

info: To learn more about these tools and see an example of how to use Sweeps with XGBoost, check out this interactive Colab notebook.

tl;dr: trees outperform linear learners on this classification dataset.
