Weave is built on three foundational primitives that work together across all features:

Ops

An Op is a versioned, tracked function. When you decorate a function with @weave.op() (Python) or wrap it with weave.op() (TypeScript), Weave automatically captures its code, inputs, outputs, and execution metadata.
import weave

@weave.op()
def generate_response(prompt: str) -> str:
    # Your LLM logic here; call_llm is a placeholder for your model call
    response = call_llm(prompt)
    return response
Ops are the building blocks of tracing, evaluation scorers, and any tracked computation.
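Conceptually, an op wraps a function so that every execution is recorded along with its inputs, output, and latency. The following is a minimal standard-library sketch of that idea, not Weave's actual implementation (which persists calls to a server rather than an in-memory list):

```python
import functools
import time

calls = []  # in-memory log standing in for Weave's call store

def op(fn):
    """Minimal sketch of a tracking decorator: records inputs,
    output, and latency for every execution of the function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        output = fn(*args, **kwargs)
        calls.append({
            "op": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": output,
            "latency_s": time.perf_counter() - start,
        })
        return output
    return wrapper

@op
def generate_response(prompt: str) -> str:
    return f"echo: {prompt}"

generate_response("hi")
print(calls[0]["op"], calls[0]["output"])
```

Because the wrapper returns the original output unchanged, decorating a function this way does not alter its behavior, only adds observation, which is the same property that lets you add @weave.op() to existing code freely.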

Objects

An Object is versioned, serializable data. Weave automatically versions objects when they change, creating an immutable history. Objects include:
  • Datasets: Collections of examples for evaluation
  • Models: Configurations and parameters for your LLM logic
  • Prompts: Versioned prompt templates
dataset = weave.Dataset(
    name="test-cases",
    rows=[
        {"input": "What is 2+2?", "expected": "4"},
        {"input": "What is the capital of France?", "expected": "Paris"},
    ]
)
weave.publish(dataset)
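The automatic versioning above can be pictured as content-addressing: when an object's serialized contents change, it gets a new version identifier, and identical contents always map to the same one. A hedged standard-library sketch of that idea (Weave's real versioning scheme differs in its details):

```python
import hashlib
import json

def version_digest(obj: dict) -> str:
    """Hash a canonical JSON serialization of the object.
    Equal contents always produce the same digest; any change
    produces a new one."""
    payload = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:8]

v1 = version_digest({"name": "test-cases",
                     "rows": [{"input": "What is 2+2?", "expected": "4"}]})
v2 = version_digest({"name": "test-cases",
                     "rows": [{"input": "What is 2+2?", "expected": "four"}]})
assert v1 != v2  # changed contents yield a new version
```

This is why publishing an unchanged object does not create a new version, while editing a single row does.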

Calls

A Call is a logged execution of an Op. Every time an Op runs, Weave creates a Call that captures:
  • Input arguments
  • Output value
  • Timing and latency
  • Parent-child relationships (for nested calls)
  • Any errors that occurred
Calls form the backbone of Weave’s tracing system and provide the data for debugging, analysis, and evaluation.
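The parent-child relationships listed above mean calls form a tree: when one op invokes another, the inner call records the outer call as its parent. A minimal standard-library sketch of such call records, using a stack to track the currently executing call (illustrative only, not Weave's implementation):

```python
import itertools

_ids = itertools.count(1)
_stack = []    # calls currently executing
call_log = []  # finished calls, innermost first

def traced(fn):
    """Record each execution as a call with a parent_id so that
    nested executions form a tree, and capture any error raised."""
    def wrapper(*args, **kwargs):
        call = {
            "id": next(_ids),
            "op": fn.__name__,
            "parent_id": _stack[-1]["id"] if _stack else None,
            "error": None,
        }
        _stack.append(call)
        try:
            call["output"] = fn(*args, **kwargs)
            return call["output"]
        except Exception as e:
            call["error"] = repr(e)
            raise
        finally:
            _stack.pop()
            call_log.append(call)
    return wrapper

@traced
def retrieve(query: str) -> list:
    return ["doc1"]

@traced
def answer(query: str) -> str:
    docs = retrieve(query)  # nested call: parent is the answer call
    return f"answer using {docs[0]}"

answer("What is the capital of France?")
```

After running this, the retrieve call's parent_id points at the answer call, which is exactly the structure a trace viewer renders as an expandable tree.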

Models

A Weave Model combines configuration and code that define how you use an LLM, such as the model name, prompts, temperature, and API settings. Models let you track and version your application's parameters and behavior. Using the weave.Model class, you declare the attributes, logic, and parameters you want tracked. For example, the following model uses OpenAI to generate a response to a prompt:
import weave
from openai import OpenAI

class MyApp(weave.Model):
    # These parameters are tracked and versioned
    model_name: str
    system_prompt: str
    temperature: float = 0.7

    @weave.op()
    def predict(self, user_input: str) -> str:
        client = OpenAI()
        response = client.chat.completions.create(
            model=self.model_name,
            temperature=self.temperature,
            messages=[
                {"role": "system", "content": self.system_prompt},
                {"role": "user", "content": user_input},
            ],
        )
        return response.choices[0].message.content
You can then instantiate the model and use it to generate a response:
model = MyApp(model_name="gpt-4o", system_prompt="You are a helpful assistant")
response = model.predict("What is the capital of France?")
print(response)
Weave tracks the model’s parameters and behavior, and creates a new version whenever you change its attributes or code.
The weave.Model class is not currently supported by the Weave TypeScript SDK.