Use Serverless RL
Get started using Serverless RL.
Now in public preview, Serverless RL helps developers post-train LLMs to learn new behaviors and improve their reliability, speed, and cost-efficiency on multi-turn agentic tasks. W&B provisions the training infrastructure (on CoreWeave) for you while leaving you full flexibility in how you set up your environment. Serverless RL gives you instant access to a managed training cluster that elastically auto-scales to dozens of GPUs. By splitting RL workflows into inference and training phases and multiplexing them across jobs, Serverless RL increases GPU utilization and reduces your training time and costs.
Serverless RL is well suited to these kinds of multi-turn agentic tasks.
Serverless RL trains low-rank adapters (LoRAs) that specialize a model for your agent's specific task, extending the original model's capabilities with on-the-job experience. The LoRAs you train are automatically stored as artifacts in your W&B account and can be downloaded locally or copied to third-party storage for backup. Models that you train through Serverless RL are also automatically hosted on W&B Inference.
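For example, because each trained adapter is a standard W&B artifact, you can fetch it with the public `wandb` API. The sketch below is a minimal illustration; the entity, project, and artifact names are placeholders, and the `latest` alias assumes you want the most recent version.

```python
import wandb

# Authenticate with your W&B API key first (e.g., `wandb login`
# or the WANDB_API_KEY environment variable).
api = wandb.Api()

# Placeholder names: substitute your entity, project, and the
# artifact name under which your trained LoRA was stored.
artifact = api.artifact("my-entity/my-project/my-agent-lora:latest")

# Download the adapter weights to a local directory, for backup
# or for copying to third-party storage.
local_dir = artifact.download()
print(f"LoRA adapter saved to {local_dir}")
```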
Reinforcement learning (RL) is a set of powerful training techniques that you can use in many kinds of training setups, including on GPUs that you own or rent directly. Compared to managing that infrastructure yourself, Serverless RL can provide the following advantages in your RL post-training:

- No infrastructure to manage: W&B provisions and scales the training cluster (on CoreWeave) for you.
- Elastic capacity: jobs auto-scale to dozens of GPUs when you need them.
- Higher efficiency: multiplexing inference and training phases across jobs increases GPU utilization, reducing training time and costs.
Serverless RL uses a combination of the following W&B components to operate:

- W&B Training: the managed, auto-scaling GPU cluster that runs your RL jobs.
- W&B Inference: hosts the models and the adapters you train so you can query them immediately.
- W&B Artifacts: stores each trained LoRA adapter in your W&B account.
Serverless RL is in public preview. During the preview, you are charged only for inference usage and artifact storage; W&B does not charge for adapter training.
To continue with Serverless RL:

- Get started using Serverless RL.
- Make inference requests to the models you've trained (see the sketch after this list).
- Understand pricing, usage limits, and account restrictions for W&B Serverless RL.
- See the models you can train with Serverless RL.
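As a quick illustration of making inference requests, here is a minimal sketch. It assumes the OpenAI-compatible W&B Inference endpoint (`https://api.inference.wandb.ai/v1`); the team, project, and model identifiers are placeholders to replace with the values for your own trained model.

```python
import openai

# W&B Inference exposes an OpenAI-compatible API, so the standard
# OpenAI client works. The base URL, team/project, and model name
# below are assumptions to adapt to your account.
client = openai.OpenAI(
    base_url="https://api.inference.wandb.ai/v1",
    api_key="<your-wandb-api-key>",
    # Attributes usage to your team and project (placeholder value).
    project="my-team/my-project",
)

response = client.chat.completions.create(
    # Hypothetical identifier for a model trained with Serverless RL.
    model="my-team/my-project/my-agent-lora:latest",
    messages=[{"role": "user", "content": "Plan the next research step."}],
)
print(response.choices[0].message.content)
```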