Prompts for LLMs

W&B Prompts is a suite of LLMOps tools built for the development of LLM-powered applications. Use W&B Prompts to visualize and inspect the execution flow of your LLMs, analyze their inputs and outputs, view intermediate results, and securely store and manage your prompts and LLM chain configurations.

Use Cases

W&B Prompts provides several solutions for building and monitoring LLM-based apps. Software developers, prompt engineers, ML practitioners, data scientists, and other stakeholders working with LLMs need cutting-edge tools to:

  • Explore and debug LLM chains and prompts with greater granularity.
  • Monitor and observe LLMs to better understand and evaluate performance, usage, and budgets.



Traces

W&B’s LLM tool is called Traces. Traces allow you to track and visualize the inputs and outputs, execution flow, model architecture, and any intermediate results of your LLM chains.

Use Traces for LLM chaining, plug-in, or pipelining use cases. You can use your own LLM chaining implementation or a W&B integration provided by LLM libraries such as LangChain.

Traces consists of three main components:

  • Trace table: Overview of the inputs and outputs of a chain.
  • Trace timeline: Displays the execution flow of the chain and is color-coded according to component types.
  • Model architecture: View details about the structure of the chain and the parameters used to initialize each component of the chain.
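Conceptually, all three views render the same underlying data: a tree of timed spans, where each span records one component's inputs, outputs, and timing. A minimal stdlib sketch of that structure (the class and field names here are illustrative, not the actual wandb API):

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One trace event: a single component execution in the chain."""
    name: str
    kind: str                 # e.g. "CHAIN", "LLM", "TOOL" -- drives timeline color-coding
    start_ms: int
    end_ms: int
    inputs: dict = field(default_factory=dict)
    outputs: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def timeline(span, depth=0):
    """Flatten the span tree into (depth, name, duration_ms) rows in
    execution order -- roughly what the Trace Timeline displays."""
    rows = [(depth, span.name, span.end_ms - span.start_ms)]
    for child in span.children:
        rows.extend(timeline(child, depth + 1))
    return rows

# A toy two-step chain: an LLM call followed by a tool call.
root = Span("agent", "CHAIN", 0, 130, inputs={"query": "2+2?"}, outputs={"answer": "4"})
root.children = [
    Span("llm", "LLM", 5, 90, inputs={"prompt": "2+2?"}, outputs={"text": "use calculator"}),
    Span("calculator", "TOOL", 95, 125, inputs={"expr": "2+2"}, outputs={"result": "4"}),
]

for depth, name, dur in timeline(root):
    print("  " * depth + f"{name}: {dur} ms")
```

The nesting of spans is what gives the timeline its hierarchy, and the per-span inputs and outputs are what you see when you select a trace event.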

Trace Table

The Trace Table provides an overview of the inputs and outputs of a chain. The trace table also provides information about the composition of a trace event in the chain, whether or not the chain ran successfully, and any error messages returned when running the chain.
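As a sketch, you can think of each Trace Table row as a record of one chain run: its inputs, outputs, whether it succeeded, and any error message. The field names below are illustrative, not the actual table schema:

```python
# Illustrative rows, one per chain run (field names are hypothetical).
runs = [
    {"input": "What is 2+2?", "output": "4", "success": True, "error": None},
    {"input": "Summarize doc", "output": None, "success": False,
     "error": "RateLimitError: quota exceeded"},
]

# The table lets you scan for failed runs and read their error messages.
failed = [r for r in runs if not r["success"]]
for r in failed:
    print(f"FAILED: {r['input']!r} -> {r['error']}")
```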

Screenshot of a trace table.

Click on a row number on the left-hand side of the table to view the Trace Timeline for that instance of the chain.

Trace Timeline

The Trace Timeline view displays the execution flow of the chain and is color-coded according to component types. Select a trace event to display the inputs, outputs, and metadata of that trace.

Screenshot of a Trace Timeline.

Trace events that raise an error are outlined in red. Click on a trace event colored in red to view the returned error message.

Screenshot of a Trace Timeline error.

Model Architecture

The Model Architecture view provides details about the structure of the chain and the parameters used to initialize each component of the chain. Click on a trace event to learn more details about that event.
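In essence, this view is a per-component report of initialization parameters. A hedged sketch, assuming each chain component keeps the keyword arguments it was constructed with (the class and component names are illustrative):

```python
class Component:
    """Records its own init parameters so they can be reported later."""
    def __init__(self, name, **params):
        self.name = name
        self.params = params
        self.children = []

def architecture(component, depth=0):
    """Recursively summarize the chain structure and each component's
    init parameters -- roughly what the Model Architecture view shows."""
    lines = [("  " * depth) + f"{component.name} {component.params}"]
    for child in component.children:
        lines.extend(architecture(child, depth + 1))
    return lines

chain = Component("agent", max_iterations=3)
chain.children = [
    Component("llm", model_name="gpt-4", temperature=0.7),
    Component("calculator", precision=10),
]
print("\n".join(architecture(chain)))
```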


Weave

Weave is a visual development environment designed for building AI-powered software. It is also an open-source, interactive analytics toolkit for performant data exploration.

Use Weave to:

  • Spend less time waiting for datasets to load and more time exploring data, deriving insights, and building powerful data analytics.
  • Interactively explore your data. Work with your data visually and dynamically to discover patterns that static graphs can not reveal, without using complicated APIs.
  • Monitor AI applications and models in production with real-time metrics, customizable visualizations, and interactive analysis.
  • Generate Boards to address common use cases when monitoring production models and working with LLMs.

How it works

Use Weave to view your dataframe in your notebook with only a few lines of code:

  1. First, install or update to the latest version of Weave with pip:

pip install weave --upgrade

  2. Load your dataframe into your notebook.
  3. View your dataframe with weave.show:

import weave
from sklearn.datasets import load_iris

# We load in the iris dataset for demonstrative purposes
iris = load_iris(as_frame=True)
df = iris.data
weave.show(df)

An interactive Weave dashboard will appear.


Weights & Biases also has lightweight integrations for LLM libraries such as LangChain and for the OpenAI API.

Getting Started

We recommend you go through the Prompts Quickstart guide, which walks you through logging a custom LLM pipeline with Trace. A Colab version of the guide is also available.

Next Steps

  • Check out more detailed documentation on Weave, Trace, or our OpenAI Integration.
  • Try one of our demo Colabs, which offer more detailed explanations of how to use Prompts for LLMOps and building interactive data applications.