Serverless RL

Learn how to post-train your models more efficiently using reinforcement learning.

Now in public preview, Serverless RL helps developers post-train LLMs to learn new behaviors and improve reliability, speed, and cost efficiency on multi-turn agentic tasks. W&B provisions the training infrastructure (on CoreWeave) for you while giving you full flexibility in your environment’s setup. Serverless RL gives you instant access to a managed training cluster that elastically auto-scales to dozens of GPUs. By splitting RL workflows into inference and training phases and multiplexing them across jobs, Serverless RL increases GPU utilization and reduces your training time and costs.

Serverless RL is ideal for tasks like:

  • Voice agents
  • Deep research assistants
  • On-prem models
  • Content marketing analysis agents

Serverless RL trains low-rank adapters (LoRAs) to specialize a model for your agent’s specific task. This extends the original model’s capabilities with on-the-job experience. The LoRAs you train are automatically stored as artifacts in your W&B account, and can be saved locally or to a third party for backup. Models that you train through Serverless RL are also automatically hosted on W&B Inference.
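
For example, backing up a trained adapter locally takes a few lines with the W&B Python SDK. A minimal sketch; the artifact path below is illustrative and matches the endpoint example later on this page:

import wandb

# Fetch a specific LoRA checkpoint stored as a W&B artifact.
api = wandb.Api()
adapter = api.artifact("email-specialists/mail-search/agent-001:step25")

# Download the adapter weights to a local directory for backup.
local_dir = adapter.download()
print(f"LoRA adapter saved to {local_dir}")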

Why Serverless RL?

Reinforcement learning (RL) is a set of powerful training techniques that you can use in many kinds of training setups, including on GPUs that you own or rent directly. Serverless RL can provide the following advantages in your RL post-training:

  • Lower training costs: By multiplexing shared infrastructure across many users, skipping the setup process for each job, and scaling your GPU costs down to 0 when you’re not actively training, Serverless RL reduces training costs significantly.
  • Faster training time: By splitting inference requests across many GPUs and immediately provisioning training infrastructure when you need it, Serverless RL speeds up your training jobs and lets you iterate faster.
  • Automatic deployment: Serverless RL automatically deploys every checkpoint you train, eliminating the need to manually set up hosting infrastructure. Trained models can be accessed and tested immediately in local, staging, or production environments.

How Serverless RL uses W&B services

Serverless RL uses a combination of the following W&B components to operate:

  • Inference: To run your models
  • Models: To track performance metrics during the LoRA adapter’s training
  • Artifacts: To store and version the LoRA adapters
  • Weave (optional): To gain observability into how the model responds at each step of the training loop (see the sketch after this list)
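
Enabling Weave tracing is typically a one-decorator change. A minimal sketch, assuming a project named mail-search; the function here is illustrative:

import weave

# Send traces to your W&B project (hypothetical project name).
weave.init("mail-search")

# Decorated functions record their inputs and outputs as traces.
@weave.op()
def agent_step(prompt: str) -> str:
    # Call your model here; this placeholder just echoes the prompt.
    return f"response to: {prompt}"

agent_step("Summarize the unread emails.")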

Serverless RL is in public preview. During the preview, you are charged only for the use of inference and the storage of artifacts. W&B does not charge for adapter training during the preview period.

1 - Use Serverless RL

Get started using Serverless RL.

You can use Serverless RL through OpenPipe’s ART framework and the W&B Training API.

To start using Serverless RL, see the ART quickstart for code examples and workflows. To learn about Serverless RL’s API endpoints, see the W&B Training API.
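
The quickstart is the authoritative reference; the sketch below only illustrates the shape of an ART training loop. The class and method names follow ART’s public README at the time of writing and may differ in your version, so treat them as assumptions:

import asyncio

import art
from art.local import LocalBackend  # Serverless RL may use a different backend; see the quickstart

async def main():
    # A TrainableModel is a LoRA adapter on top of a supported base model.
    model = art.TrainableModel(
        name="agent-001",
        project="mail-search",
        base_model="Qwen/Qwen2.5-14B-Instruct",
    )
    await model.register(LocalBackend())

    # One scored rollout: the agent's conversation plus a reward
    # computed by your reward function.
    trajectory = art.Trajectory(
        messages_and_choices=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Find the unread invoice email."},
            {"role": "assistant", "content": "Found it: invoice-042."},
        ],
        reward=1.0,
    )

    # Each training step consumes batches of trajectories and updates the LoRA.
    await model.train([art.TrajectoryGroup([trajectory])])

asyncio.run(main())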

2 - Use your trained models

Make inference requests to the models you’ve trained.

After you train a model with Serverless RL, it is automatically available for inference.

To send requests to your trained model, you need:

  • Your W&B API key
  • The trained model’s endpoint

The model’s endpoint uses the following schema:

wandb-artifact:///<entity>/<project>/<model-name>:<step>

The schema consists of:

  • Your W&B entity (team) name
  • The name of the project associated with your model
  • The trained model’s name
  • The training step of the model you want to deploy (this is usually the step where the model performed best in your evaluations)

For example, if your W&B team is named email-specialists, your project is called mail-search, your trained model is named agent-001, and you want to deploy it at step 25, the endpoint looks like this:

wandb-artifact:///email-specialists/mail-search/agent-001:step25
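
If you build these strings programmatically, a tiny illustrative helper (not part of any SDK) keeps the pieces straight:

def model_endpoint(entity: str, project: str, model_name: str, step: int) -> str:
    # Assemble the wandb-artifact endpoint for a trained checkpoint.
    return f"wandb-artifact:///{entity}/{project}/{model_name}:step{step}"

print(model_endpoint("email-specialists", "mail-search", "agent-001", 25))
# -> wandb-artifact:///email-specialists/mail-search/agent-001:step25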

Once you have your endpoint, you can integrate it into your normal inference workflows. The following examples show how to make inference requests to your trained model using a cURL request or the Python OpenAI SDK.

cURL

curl https://api.training.wandb.ai/v1/chat/completions \
    -H "Authorization: Bearer $WANDB_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
            "model": "wandb-artifact:///<entity>/<project>/<model-name>:<step>",
            "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize our training run."}
            ],
            "temperature": 0.7,
            "top_p": 0.95
        }'

OpenAI SDK

from openai import OpenAI

# Your W&B credentials and project details (avoid hardcoding keys in production).
WANDB_API_KEY = "your-wandb-api-key"
ENTITY = "my-entity"
PROJECT = "my-project"

# Point the OpenAI client at the W&B Training inference endpoint.
client = OpenAI(
    base_url="https://api.training.wandb.ai/v1",
    api_key=WANDB_API_KEY,
)

# Reference the trained model by its artifact endpoint, including the checkpoint step.
response = client.chat.completions.create(
    model=f"wandb-artifact:///{ENTITY}/{PROJECT}/my-model:step100",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our training run."},
    ],
    temperature=0.7,
    top_p=0.95,
)

print(response.choices[0].message.content)

3 - Usage information and limits

Understand pricing, usage limits, and account restrictions for W&B Serverless RL.

Pricing

Pricing has three components: inference, training, and storage. For specific billing rates, visit our pricing page.

Inference

Pricing for Serverless RL inference requests matches W&B Inference pricing. See model-specific costs for more details. Learn more about purchasing credits, account tiers, and usage caps in the W&B Inference docs.

Training

At each training step, Serverless RL collects batches of trajectories that include your agent’s outputs and associated rewards (calculated by your reward function). The batched trajectories are then used to update the weights of a LoRA adapter that specializes a base model for your task. The training jobs to update these LoRAs run on dedicated GPU clusters managed by Serverless RL.
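
The reward function is the piece you supply. A minimal hypothetical example that scores one rollout’s final output:

def reward(final_output: str) -> float:
    # Hypothetical scorer: full credit if the agent found the target email,
    # partial credit if it at least searched the right folder.
    if "invoice-042" in final_output:
        return 1.0
    if "invoices folder" in final_output:
        return 0.25
    return 0.0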

Training is free during the public preview period.

Model storage

Serverless RL stores checkpoints of your trained LoRAs so you can evaluate, serve, or continue training them at any time. Storage is billed monthly based on total checkpoint size and your pricing plan. Every plan includes at least 5GB of free storage, which is enough for roughly 30 LoRAs. We recommend deleting low-performing LoRAs to save space. See the ART SDK for instructions on how to do this.
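
The ART SDK is the documented route; as an alternative sketch, you can also delete an individual checkpoint version directly with the W&B Python SDK (artifact path illustrative):

import wandb

api = wandb.Api()
# Reference the checkpoint version you no longer need.
checkpoint = api.artifact("email-specialists/mail-search/agent-001:step10")

# Remove it; delete_aliases clears aliases such as "step10" first.
checkpoint.delete(delete_aliases=True)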

Limits

  • Inference concurrency limits: By default, Serverless RL currently supports up to 2000 concurrent requests per user and 6000 per project. If you exceed your rate limit, the Inference API returns a 429 response with the message Concurrency limit reached for requests. To avoid this error, reduce the number of concurrent requests your training job or production workload makes (see the backoff sketch after this list). If you need a higher rate limit, you can request one at support@wandb.com.

  • Personal entities unsupported: Serverless RL and W&B Inference don’t support personal entities (personal accounts). To access Serverless RL, switch to a non-personal account by creating a Team. Personal entities were deprecated in May 2024, so this restriction applies only to legacy accounts.

  • Geographic restrictions: Serverless RL is only available in supported geographic locations. For more information, see the Terms of Service.
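
To handle the 429 mentioned above, one option is retrying with exponential backoff. A minimal sketch using the OpenAI SDK, with endpoint placeholders as in the earlier examples:

import time

from openai import OpenAI, RateLimitError

client = OpenAI(
    base_url="https://api.training.wandb.ai/v1",
    api_key="your-wandb-api-key",
)

def chat_with_backoff(messages, max_retries=5):
    # Retry rate-limited (429) requests, doubling the wait each attempt.
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="wandb-artifact:///<entity>/<project>/<model-name>:<step>",
                messages=messages,
            )
        except RateLimitError:
            time.sleep(2 ** attempt)
    raise RuntimeError("Still rate-limited after retries; reduce concurrency")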

4 - Available models

See the models you can train with Serverless RL.

Serverless RL currently supports a single open-source foundation model for training.

To express interest in a particular model, contact support.

Model catalog

Model: Qwen2.5 14B
Model ID (for API usage): Qwen/Qwen2.5-14B-Instruct
Type: Text
Context Window: 32K
Parameters (Active-Total): 14B
Description: Dense model optimized for throughput and quality