W&B Quickstart
Install W&B and start tracking your machine learning experiments in minutes.
Sign up and create an API key
An API key authenticates your machine to W&B. You can generate an API key from your user profile.
- Click your user profile icon in the upper right corner.
- Select User Settings, then scroll to the API Keys section.
- Click Reveal. Copy the displayed API key. To hide the API key, reload the page.
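If you need to authenticate non-interactively (for example, in CI or a hosted notebook), you can also pass the key to `wandb.login()` once the `wandb` library is installed (next section). The snippet below is a minimal sketch that assumes the key is stored in the `WANDB_API_KEY` environment variable rather than hard-coded:

```python
import os

import wandb

# Assumes you exported the key beforehand: export WANDB_API_KEY=<your_api_key>
# wandb.login() also picks this variable up automatically; passing key= just makes it explicit.
wandb.login(key=os.environ["WANDB_API_KEY"])
```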
Install the `wandb` library and log in
To install the `wandb` library locally and log in:

Command line:

- Set the `WANDB_API_KEY` environment variable to your API key.

  ```bash
  export WANDB_API_KEY=<your_api_key>
  ```

- Install the `wandb` library and log in.

  ```bash
  pip install wandb
  wandb login
  ```

Python:

```bash
pip install wandb
```

```python
import wandb

wandb.login()
```

Python notebook:

```python
!pip install wandb

import wandb
wandb.login()
```
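To confirm the setup (an optional check, not part of the quickstart itself), you can print the installed client version and the login status:

```python
import wandb

# Print the installed wandb client version.
print(wandb.__version__)

# login() returns True once credentials are available, either from
# `wandb login` or the WANDB_API_KEY environment variable.
print(wandb.login())
```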
Start a run and track hyperparameters
Initialize a W&B Run object in your Python script or notebook with `wandb.init()` and pass a dictionary to the `config` parameter with key-value pairs of hyperparameter names and values:
```python
run = wandb.init(
    # Set the project where this run will be logged
    project="my-awesome-project",
    # Track hyperparameters and run metadata
    config={
        "learning_rate": 0.01,
        "epochs": 10,
    },
)
```
A run is the basic building block of W&B. You will use runs often to track metrics, create logs, and more.
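Once `wandb.init()` returns a run, you can read hyperparameters back from `run.config` and log metrics against the run. The following is a minimal sketch with made-up metric values; the full quickstart script appears in the next section.

```python
import wandb

run = wandb.init(
    project="my-awesome-project",
    config={"learning_rate": 0.01, "epochs": 10},
)

# Hyperparameters live on run.config, so the rest of the script
# can read them from a single place.
lr = run.config["learning_rate"]

# Each call to run.log() records one step of metrics (placeholder value here).
run.log({"loss": 0.5})

# Mark the run as finished, which is especially useful in notebooks.
run.finish()
```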
Put it all together
Putting it all together, your training script might look similar to the following code example. The W&B-specific lines are the calls to `wandb.login()`, `wandb.init()`, and `wandb.log()`; the rest of the code mimics a machine learning training loop.
```python
# train.py
import wandb
import random  # for demo script

wandb.login()

epochs = 10
lr = 0.01

run = wandb.init(
    # Set the project where this run will be logged
    project="my-awesome-project",
    # Track hyperparameters and run metadata
    config={
        "learning_rate": lr,
        "epochs": epochs,
    },
)

offset = random.random() / 5
print(f"lr: {lr}")

# simulating a training run
for epoch in range(2, epochs):
    acc = 1 - 2**-epoch - random.random() / epoch - offset
    loss = 2**-epoch + random.random() / epoch + offset
    print(f"epoch={epoch}, accuracy={acc}, loss={loss}")
    wandb.log({"accuracy": acc, "loss": loss})

# run.log_code()
```
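The final line, `run.log_code()`, is left commented out in the example. If you enable it, it saves a snapshot of your source files with the run so results can be traced back to the exact code that produced them; a minimal sketch:

```python
# Optional: after wandb.init(), upload the Python source files in the
# current directory so results can be traced back to the exact code.
run.log_code()
```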
That’s it. Navigate to the W&B App at https://wandb.ai/home to view how the metrics we logged with W&B (accuracy and loss) improved during each training step.

[Image: W&B App dashboard charts of the logged accuracy and loss.]
The image above shows the loss and accuracy tracked from each time we ran the script. Each run object that was created is shown in the Runs column. Each run name is randomly generated.
What’s next?
Explore the rest of the W&B ecosystem.
- Check out W&B Integrations to learn how to integrate W&B with your ML framework such as PyTorch, ML library such as Hugging Face, or ML service such as SageMaker.
- Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with W&B Reports.
- Create W&B Artifacts to track datasets, models, dependencies, and results through each step of your machine learning pipeline.
- Automate hyperparameter search and explore the space of possible models with W&B Sweeps.
- Understand your datasets, visualize model predictions, and share insights in a central dashboard with W&B Tables.
- Navigate to W&B AI Academy and learn about LLMs, MLOps and W&B Models from hands-on courses.