The main threads of Weave
Weave provides the following core functionality:
- Visibility into every LLM call, input, and output in your application.
- Systematic evaluation to measure performance against curated test cases.
- Version tracking for prompts, models, and data so you can understand what changed.
- Experimentation to compare different prompts and models.
- Feedback collection to capture human judgments and annotations.
- Monitoring in production using guardrails and scorers for LLM safety and quality.
Traces
Track end-to-end how data flows through your LLM application.
- See the inputs and outputs of each use of your application.
- See the source documents used to produce an LLM response.
- See cost, token count, and latency of LLM calls.
- Drill down into specific prompts and how answers are produced.
- Collect feedback on responses from users.
- In your code, you can use Weave ops and calls to track what your functions are doing.
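As a rough sketch of what that looks like in the Python SDK (the project name, function name, and stub logic below are placeholders, and the `.call()` helper for retrieving the call object follows recent SDK versions):

```python
import weave

weave.init("my-project")

@weave.op()
def classify(text: str) -> str:
    # Your LLM call would go here; a stub keeps the sketch self-contained.
    return "positive" if "good" in text else "negative"

# Each invocation is recorded as a call with its inputs and output.
# .call() returns both the result and the call object for the trace.
result, call = classify.call("The food was good.")
print(result, call.id)
```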
Evaluations
Systematically benchmark your LLM application's performance to gain confidence before deploying to production.
- Easily track which model and prompt versions produced which results.
- Define metrics to evaluate responses using one or more scoring functions.
- Compare two or more evaluations across multiple metrics, and contrast specific samples to see where performance differs.
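A minimal sketch of how these pieces fit together in the Python SDK; the dataset row, scorer, and model function are invented for illustration, and the `output` parameter name follows recent SDK versions:

```python
import asyncio
import weave

weave.init("my-project")

@weave.op()
def answer(question: str) -> str:
    # Stand-in for your real model or pipeline.
    return "Paris"

# A scorer receives dataset fields plus the model output by name.
@weave.op()
def exact_match(expected: str, output: str) -> dict:
    return {"correct": expected == output}

evaluation = weave.Evaluation(
    dataset=[{"question": "What is the capital of France?", "expected": "Paris"}],
    scorers=[exact_match],
)

# evaluate() is async; each row, prediction, and score is logged to Weave.
asyncio.run(evaluation.evaluate(answer))
```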
Version everything
Weave tracks versions of your prompts, datasets, and model configurations. When something breaks, you can see exactly what changed. When something works, you can reproduce it. Learn about versioning.
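A hedged sketch of what versioning looks like in the Python SDK: publishing an object creates a new version, and a ref fetches it back (the names and rows below are illustrative):

```python
import weave

weave.init("my-project")

# Publishing creates a new version of the dataset; re-publishing
# changed rows under the same name creates another version.
dataset = weave.Dataset(
    name="grammar-examples",
    rows=[{"sentence": "She go to school.", "corrected": "She goes to school."}],
)
weave.publish(dataset)

# Fetch the latest (or a pinned) version back by reference.
fetched = weave.ref("grammar-examples").get()
```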
Experiment with prompts and models
Bring your API keys and quickly test prompts and compare responses from various commercial models using the Playground. Experiment in the Weave Playground.
Collect feedback
Capture human feedback, annotations, and corrections from production use. Use this data to build better test cases and improve your application. Collect feedback.
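Feedback can also be attached to calls programmatically. A sketch, assuming the feedback methods in the Python SDK's feedback API (the function, reaction, and note contents are made up, and method names may differ by version):

```python
import weave

weave.init("my-project")

@weave.op()
def summarize(text: str) -> str:
    return text[:40]  # stand-in for a real LLM call

result, call = summarize.call("Weave captures feedback alongside traces.")

# Attach human judgments directly to the traced call.
call.feedback.add_reaction("👍")
call.feedback.add_note("Good summary, slightly truncated.")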
Monitor production
Score production traffic with the same scorers you use in evaluation. Set up guardrails to catch issues before they reach users. Set up guardrails and monitors.
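A sketch of the scorer-as-guardrail pattern, assuming the Python SDK's `apply_scorer` API as described in Weave's guardrails docs; the scorer logic is a toy stand-in, and attribute names may differ by version:

```python
import asyncio
import weave

weave.init("my-project")

@weave.op()
def generate(prompt: str) -> str:
    return "stub response"  # stand-in for your model

class ContainsEmail(weave.Scorer):
    @weave.op()
    def score(self, output: str) -> dict:
        # Toy check standing in for a real PII detector.
        return {"flagged": "@" in output}

async def main():
    result, call = generate.call("Draft a reply to the customer.")
    # Score the live call; block or rewrite the response if flagged.
    evaluation = await call.apply_scorer(ContainsEmail())
    if evaluation.result["flagged"]:
        result = "[response withheld]"
    print(result)

asyncio.run(main())
```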
Get started using Weave
Weave provides SDKs for Python and TypeScript. Both SDKs support tracing, evaluation, datasets, and the other core Weave features. Some advanced features, such as class-based Models and Scorers, are not currently available in the TypeScript SDK. To get started using Weave:
- Create a Weights & Biases account at https://wandb.ai/site and get your API key from https://wandb.ai/authorize.
- Install Weave:
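For the Python SDK:

```bash
pip install weave
```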
- In your script, import Weave and initialize a project:
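For example, in Python (the project name below is a placeholder):

```python
import weave

weave.init("quickstart-project")
```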
- Beyond the supported integrations, you can also use Weave to log traces for custom functions by adding a single line. Decorate a function with @weave.op() (in Python) or wrap it with weave.op() (in TypeScript), and Weave automatically captures its code, inputs, outputs, and execution metadata.
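For example, a minimal Python sketch; the function name and body are placeholders standing in for a real LLM call:

```python
import weave

weave.init("quickstart-project")

@weave.op()
def extract_fruit(sentence: str) -> str:
    # In a real app this would call your LLM; Weave records the
    # inputs, output, code version, and latency of every call.
    return "apple"

extract_fruit("I ate an apple and a banana today.")
```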