
OpenAI Fine-Tuning

With Weights & Biases you can log your OpenAI GPT-3.5 or GPT-4 fine-tuning metrics and configuration, analyse and understand the performance of your newly fine-tuned models, and share the results with your colleagues.

Sync your OpenAI Fine-Tuning Results in 1 Line

If you use OpenAI's API to fine-tune OpenAI models, you can now use the W&B integration to track experiments, models, and datasets in your central dashboard.

openai wandb sync

Check out interactive examples

Sync your fine-tunes with one line

Make sure you are using the latest versions of openai and wandb.

pip install --upgrade openai wandb

Then sync your results from the command line or from your script.

# one line command
openai wandb sync

# passing optional parameters
openai wandb sync --help

When you sync your results, wandb checks OpenAI for newly completed fine-tunes and automatically adds them to your dashboard.

Optional arguments

| Argument | Description |
| --- | --- |
| -i ID, --id ID | The ID of the fine-tune (optional). |
| -n N, --n_fine_tunes N | Number of most recent fine-tunes to log when an ID is not provided. By default, every fine-tune is synced. |
| --project PROJECT | Name of the Weights & Biases project where you're sending runs. By default, it is "OpenAI-Fine-Tune". |
| --entity ENTITY | Weights & Biases username or team name where you're sending runs. By default, your default entity is used, which is usually your username. |
| --force | Forces logging and overwrites the existing wandb run of the same fine-tune. |
| --legacy | Logs results from the legacy OpenAI GPT-3 fine-tune API. |
| **kwargs_wandb_init | In Python, any additional argument is passed directly to wandb.init(). |
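
The last row refers to triggering the sync from Python rather than from the CLI. Here is a minimal sketch, assuming your installed (pre-1.0) openai package ships the WandbLogger helper that backs the openai wandb sync command; parameter names may differ between versions, so treat this as illustrative rather than definitive:

# A sketch of syncing from Python instead of the CLI.
# Assumption: the pre-1.0 openai package exposes openai.wandb_logger.WandbLogger,
# the helper behind `openai wandb sync`; verify against your installed version.
from openai.wandb_logger import WandbLogger

WandbLogger.sync(
    project="OpenAI-Fine-Tune",  # same default project as the CLI
    entity="MY_TEAM_ACCOUNT",    # optional: send runs to a team instead of your user
    n_fine_tunes=5,              # only sync the 5 most recent fine-tunes
    tags=["demo"],               # extra keyword arguments are forwarded to wandb.init()
)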

Inspect sample predictions

Use Tables to better visualize sample predictions and compare models.

Create a new run:

run = wandb.init(project="OpenAI-Fine-Tune", job_type="eval")

Retrieve a model id for inference.

You can use automatically logged artifacts to retrieve your latest model:

ft_artifact = run.use_artifact("ENTITY/PROJECT/fine_tune_details:latest")
fine_tuned_model = ft_artifact.metadata["fine_tuned_model"]

You can also retrieve your validation file:

artifact_valid = run.use_artifact("ENTITY/PROJECT/FILENAME:latest")
valid_file = artifact_valid.get_path("FILENAME").download()
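
For example, you could draw evaluation prompts from the downloaded validation file. This is a rough sketch that assumes the file is JSONL in the legacy prompt/completion format; chat fine-tune files store a messages list instead, so adjust the keys accordingly:

import json

# Assumption: each line of the validation file is a JSON object with
# "prompt" and "completion" fields (legacy format); chat fine-tunes use "messages".
with open(valid_file) as f:
    valid_records = [json.loads(line) for line in f]

my_prompts = [record["prompt"] for record in valid_records[:10]]  # take a small sample

These prompts can stand in for the placeholder list used in the next step.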

Perform some inference using the OpenAI API:

import openai

# perform inference and record results
my_prompts = ["PROMPT_1", "PROMPT_2"]
results = []
for prompt in my_prompts:
    # add any other parameters you need to the call below
    res = openai.ChatCompletion.create(
        model=fine_tuned_model,
        messages=[{"role": "user", "content": prompt}],
    )
    results.append(res["choices"][0]["message"]["content"])

Log your results with a Table:

table = wandb.Table(
    columns=["prompt", "completion"], data=list(zip(my_prompts, results))
)
run.log({"predictions": table})
run.finish()

Frequently Asked Questions

How do I share my fine-tune results with my team in W&B?

Sync all your runs to your team account with:

openai wandb sync --entity MY_TEAM_ACCOUNT

How can I organize my runs?

Your W&B runs are automatically organized and can be filtered and sorted by any configuration parameter, such as job type, base model, learning rate, training filename, or any other hyperparameter.

In addition, you can rename your runs, add notes or create tags to group them.

Once you’re satisfied, you can save your workspace and use it to create a report, importing data from your runs and saved artifacts (training/validation files).
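
If you also want to organize or inspect runs programmatically, the public wandb API supports similar filtering. A minimal sketch, assuming the default project name and that the value you filter on was stored in the run config during the sync (the exact config keys depend on what was logged for your fine-tunes):

import wandb

# Query synced fine-tune runs through the public API and filter on a config value.
api = wandb.Api()
runs = api.runs(
    "MY_ENTITY/OpenAI-Fine-Tune",               # entity/project
    filters={"config.model": "gpt-3.5-turbo"},  # assumed config key; adjust to your runs
)
for run in runs:
    print(run.name, run.state, run.config.get("fine_tuned_model"))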

How can I access my fine-tune details?

Fine-tune details are logged to W&B as artifacts and can be accessed with:

import wandb

ft_artifact = wandb.run.use_artifact("USERNAME/PROJECT/job_details:VERSION")

where VERSION is either:

  • a version number such as v2
  • the fine-tune id such as ft-xxxxxxxxx
  • an alias added automatically, such as latest, or one that you add manually

You can then access fine-tune details through ft_artifact.metadata. For example, the fine-tuned model can be retrieved with ft_artifact.metadata["fine_tuned_model"].
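
For instance, here is a short sketch pulling the details of one specific fine-tune by its id, following the snippet above (which metadata keys exist beyond fine_tuned_model depends on what the sync step logged):

import wandb

run = wandb.init(project="OpenAI-Fine-Tune", job_type="inspect")

# The fine-tune id can be used as the artifact version, as described above.
ft_artifact = run.use_artifact("USERNAME/PROJECT/job_details:ft-xxxxxxxxx")

print(ft_artifact.metadata["fine_tuned_model"])  # the resulting model id
print(ft_artifact.metadata)                      # inspect everything that was logged

run.finish()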

What if a fine-tune was not synced successfully?

You can always call openai wandb sync again, and any run that was not synced successfully will be re-synced.

If needed, you can call openai wandb sync --id fine_tune_id --force to force re-syncing a specific fine-tune.

Can I track my datasets with W&B?

Yes, you can integrate your entire pipeline with W&B through Artifacts, including creating your dataset, splitting it, training your models, and evaluating them.

This will allow complete traceability of your models.
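
As a rough sketch of the dataset side, assuming a local training file named train.jsonl (the file and artifact names here are illustrative, not prescribed by the integration):

import wandb

# Log a training file as a dataset artifact so any fine-tune that consumes it
# can be traced back to this exact version.
run = wandb.init(project="OpenAI-Fine-Tune", job_type="prepare-dataset")

dataset = wandb.Artifact(name="training-data", type="dataset")
dataset.add_file("train.jsonl")
run.log_artifact(dataset)

run.finish()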

