Weights & Biases has two OpenAI integrations:

  1. OpenAI Python SDK API: Log requests, responses, token counts and model metadata with 1 line of code for all OpenAI models

  2. OpenAI GPT-3 Fine-tuning: Log your GPT-3 fine-tuning metrics and configuration to Weights & Biases to analyse and understand the performance of your newly fine-tuned models.

Log OpenAI API calls in 1 line of code

Try in a Colab Notebook here →

With just 1 line of code you can now automatically log inputs and outputs from the OpenAI Python SDK to Weights & Biases!

Once you start logging your API inputs and outputs, you can quickly evaluate the performance of different prompts, compare model settings (such as temperature), and track other usage metrics such as token usage.

To get started, pip install the wandb library, then follow the steps below:

1. Import autolog and initialise it

First, import autolog from wandb.integration.openai and initialise it.

import os
import openai
from wandb.integration.openai import autolog

# Start logging all OpenAI API calls to Weights & Biases
autolog()


You can optionally pass a dictionary of arguments that wandb.init() accepts to autolog. These include a project name, team name, entity, and more. For more information about wandb.init, see the API Reference Guide.
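For example, a settings dictionary might look like the sketch below (the keys are standard wandb.init() arguments; the values are placeholders, not required names):

```python
# Settings passed to autolog() are forwarded to wandb.init().
# All values below are illustrative placeholders.
init_settings = {
    "project": "openai-logging",  # W&B project to log into
    "entity": "my-team",          # username or team name
}
# Usage sketch: autolog(init_settings)
```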

2. Call the OpenAI API

Each call you make to the OpenAI API will now be logged to Weights & Biases automatically.

os.environ["OPENAI_API_KEY"] = "XXX"

chat_request_kwargs = dict(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers"},
        {"role": "user", "content": "Where was it played?"},
    ],
)
response = openai.ChatCompletion.create(**chat_request_kwargs)
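Each logged call also records token counts. As a rough sketch, a ChatCompletion-style response can be unpacked like this (the nested shape follows the OpenAI API; the sample values below are made up for illustration):

```python
# A ChatCompletion response behaves like a nested dict; values are illustrative.
sample_response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "It was played at Globe Life Field in Arlington, Texas."}}
    ],
    "usage": {"prompt_tokens": 53, "completion_tokens": 13, "total_tokens": 66},
}

# Pull out the completion text and the total token usage
completion_text = sample_response["choices"][0]["message"]["content"]
tokens_used = sample_response["usage"]["total_tokens"]
```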

3. View your OpenAI API inputs and responses

Click on the Weights & Biases run link generated by autolog in step 1. This will redirect you to your project workspace in the W&B App.

Select a run you created to view the trace table, trace timeline and the model architecture of the OpenAI LLM used.

4. Disable autolog

We recommend that you call disable() to close all W&B processes when you are finished using the OpenAI API.


Now your inputs and completions will be logged to Weights & Biases, ready for analysis or to be shared with colleagues.

Log OpenAI fine-tunes to W&B

If you use OpenAI's API to fine-tune GPT-3, you can now use the W&B integration to track experiments, models, and datasets in your central dashboard.

All it takes is one line: openai wandb sync

✨ Check out interactive examples

🎉 Sync your fine-tunes with one line!

Make sure you are using the latest version of openai and wandb.

$ pip install --upgrade openai wandb

Then sync your results from the command line or from your script.

$ # one line command
$ openai wandb sync

$ # passing optional parameters
$ openai wandb sync --help

We scan for new completed fine-tunes and automatically add them to your dashboard.

In addition, your training and validation files are logged and versioned, along with details of your fine-tune results. This lets you interactively explore your training and validation data.

⚙️ Optional arguments

-i ID, --id ID          The id of the fine-tune (optional)
-n N, --n_fine_tunes N  Number of the most recent fine-tunes to log when an id is not provided. By default, every fine-tune is synced.
--project PROJECT       Name of the project where you're sending runs. By default, it is "GPT-3".
--entity ENTITY         Username or team name where you're sending runs. By default, your default entity is used, which is usually your username.
--force                 Forces logging and overwrites an existing wandb run of the same fine-tune.
**kwargs_wandb_init     In Python, any additional argument is passed directly to wandb.init()

🔍 Inspect sample predictions

Use Tables to better visualize sample predictions and compare models.

Create a new run:

run = wandb.init(project="GPT-3", job_type="eval")

Retrieve a model id for inference.

You can use automatically logged artifacts to retrieve your latest model:

artifact_job = run.use_artifact("ENTITY/PROJECT/fine_tune_details:latest")
fine_tuned_model = artifact_job.metadata["fine_tuned_model"]

You can also retrieve your validation file:

artifact_valid = run.use_artifact("ENTITY/PROJECT/FILENAME:latest")
valid_file = artifact_valid.get_path("FILENAME").download()
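OpenAI fine-tuning files are JSONL, one prompt/completion pair per line, so the downloaded validation file can be read with the standard library. A minimal sketch (load_jsonl is a hypothetical helper, not part of the integration):

```python
import json

def load_jsonl(path):
    """Read a JSONL file into a list of dicts (one JSON object per line)."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Usage sketch: records = load_jsonl(valid_file)
```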

Perform some inference using the OpenAI API:

# perform inference and record results
my_prompts = ["PROMPT_1", "PROMPT_2"]
results = []
for prompt in my_prompts:
    res = openai.Completion.create(model=fine_tuned_model,
                                   prompt=prompt,
                                   max_tokens=10,
                                   stop=["\n"])
    results.append(res["choices"][0]["text"])

Log your results with a Table:

table = wandb.Table(columns=['prompt', 'completion'],
                    data=list(zip(my_prompts, results)))
run.log({'predictions': table})

❓Frequently Asked Questions

How do I share runs with my team?

Sync all your runs to your team account with:

$ openai wandb sync --entity MY_TEAM_ACCOUNT

How can I organize my runs?

Your runs are automatically organized and can be filtered and sorted by any configuration parameter, such as job type, base model, learning rate, training filename, or any other hyperparameter.

In addition, you can rename your runs, add notes or create tags to group them.

Once you’re satisfied, you can save your workspace and use it to create a report, importing data from your runs and saved artifacts (training/validation files).

How can I access my fine-tune details?

Fine-tune details are logged to W&B as artifacts and can be accessed with:

import wandb

run = wandb.init()
artifact_job = run.use_artifact('USERNAME/PROJECT/job_details:VERSION')

where VERSION is either:

  • a version number such as v2
  • the fine-tune id such as ft-xxxxxxxxx
  • an alias, such as latest (added automatically) or one you create manually

You can then access fine-tune details through artifact_job.metadata. For example, the fine-tuned model can be retrieved with artifact_job.metadata["fine_tuned_model"].

What if a fine-tune was not synced successfully?

You can always run openai wandb sync again, and any run that was not synced successfully will be re-synced.

If needed, you can call openai wandb sync --id fine_tune_id --force to force re-syncing a specific fine-tune.

Can I track my datasets with W&B?

Yes, you can integrate your entire pipeline with W&B through Artifacts, including creating your dataset, splitting it, training your models, and evaluating them!

This will allow complete traceability of your models.
