Model Context Protocol (MCP) lets an LLM agent query and analyze data efficiently, minimizing token cost. This page shows how to use the W&B MCP server to query and analyze your W&B data from your IDE or MCP client, and to give your client programmatic access to W&B’s documentation so it can generate more accurate responses to W&B-related queries. The server integrates natively with most IDEs, coding clients, and chat agents, including:
  • Cursor
  • Visual Studio Code (VS Code)
  • Claude Code
  • Codex
  • Gemini CLI
  • Mistral Le Chat
  • Claude Desktop
The W&B MCP server is available in hosted and local variants. The hosted version supports only W&B Dedicated Cloud deployments; the local version supports both Dedicated Cloud and Self-Managed deployments.

W&B MCP Server capabilities

You can use the MCP server to analyze experiments, debug traces, create reports, and get help with integrating your applications with W&B features. The following example prompts demonstrate some of the types of tasks your agent can do when connected to the MCP server:
  • Show me the top 5 runs by eval/accuracy in your-team-name/your-project-name.
  • How did the latency of my hiring agent’s prediction traces evolve over the last few months?
  • Generate a wandb report comparing the decisions made by the hiring agent last month.
  • How do I create a leaderboard in Weave? Ask SupportBot.

Available tools

The W&B MCP server gives your agents access to the following tools:
  • query_wandb_tool: Query W&B runs, metrics, and experiments. Example: “Show me runs with loss < 0.1”
  • query_weave_traces_tool: Analyze LLM traces and evaluations. Example: “What’s the average latency?”
  • count_weave_traces_tool: Count traces and get storage metrics. Example: “How many traces failed?”
  • create_wandb_report_tool: Create W&B reports programmatically. Example: “Create a performance report”
  • query_wandb_entity_projects: List projects for an entity. Example: “What projects exist?”
  • query_wandb_support_bot: Get help from W&B documentation. Example: “How do I use sweeps?”

Use W&B’s remote MCP server

W&B provides a hosted MCP server at https://mcp.withwandb.com that requires no installation. The following instructions show how to configure the hosted server with various AI assistants and IDEs.

Prerequisites

  • A W&B Dedicated Cloud deployment.
  • A W&B API key. You can create a new one at wandb.ai/authorize.
  • Set your key as an environment variable named WANDB_API_KEY.
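For example, in a POSIX shell you can set the variable for the current session (the value below is a placeholder):

```shell
# Placeholder value; replace with your key from wandb.ai/authorize
export WANDB_API_KEY="<your-wandb-api-key>"
```

To persist the variable across sessions, add the line to your shell profile (for example, ~/.zshrc).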

Configure your MCP client

Select the tab containing your MCP client’s instructions:
You can install the W&B MCP server in Cursor automatically using a one-click installation link (requires adding Bearer <your-wandb-api-key> in the Authorization field), or manually using the following instructions:
  1. On macOS, open the Cursor menu, select Settings, and then select Cursor Settings. On Windows or Linux, open the Preferences menu, select Settings, and then select Cursor Settings.
  2. From the Cursor Settings menu, select Tools and MCP. This opens the Tools menu.
  3. In the Installed MCP Servers section, select Add Custom MCP. This opens the mcp.json configuration file.
  4. In the configuration file, in the mcpServers JSON object, add the following wandb object:
{
  "mcpServers": {
    "wandb": {
      "transport": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-wandb-api-key>",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
  5. Restart Cursor for the changes to take effect.
  6. Verify that the chat agent has access to the W&B MCP server by entering the prompt “List the projects in my W&B account.”
For more detailed information, see Cursor’s documentation.
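Other clients use a similar configuration shape. As a sketch, VS Code reads MCP servers from a .vscode/mcp.json file with a top-level servers key (check VS Code’s MCP documentation for the current format):

```json
{
  "servers": {
    "wandb": {
      "type": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-wandb-api-key>",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```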

Set up a local version of the W&B MCP server

If you need to run the MCP server locally for W&B Self-Managed deployments, development, testing, or air-gapped environments, you can install and run it on your machine.

Prerequisites

  • A W&B API key. You can create a new one at wandb.ai/authorize.
  • Set your key as an environment variable named WANDB_API_KEY.
  • Set the WANDB_BASE_URL environment variable if you are using W&B Self-Managed.
  • Python 3.10 or higher
  • uv (recommended) or pip
See uv’s docs for installation instructions.
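For example, for a Self-Managed deployment you might set both variables in a POSIX shell (both values are placeholders):

```shell
# Placeholder values; substitute your own API key and instance URL
export WANDB_API_KEY="<your-wandb-api-key>"
export WANDB_BASE_URL="https://your-wandb-instance.example.com"
```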

Install and configure the MCP server

To install the W&B MCP server on your local machine, use one of the following installation commands:
uv pip install wandb-mcp-server
# or, using pip:
pip install wandb-mcp-server
Once you have installed the MCP server locally, configure your MCP client to use it. Select an MCP client to continue:
Add the following to your mcp.json configuration:
{
  "mcpServers": {
    "wandb": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "<your-wandb-api-key>",
        "WANDB_BASE_URL": "https://your-wandb-instance.example.com"
      }
    }
  }
}
For web-based clients or testing, run the server with HTTP transport:
uvx wandb_mcp_server --transport http --host 0.0.0.0 --port 8080
To expose the local server to external clients like OpenAI, use ngrok:
uvx wandb_mcp_server --transport http --port 8080

# In another terminal, expose with ngrok
ngrok http 8080
If you expose the server using ngrok, update your MCP client configuration to use the ngrok URL.
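For example, if ngrok prints a forwarding URL such as https://example.ngrok-free.app (a placeholder; yours will differ), the HTTP configuration mirrors the hosted one:

```json
{
  "mcpServers": {
    "wandb": {
      "transport": "http",
      "url": "https://example.ngrok-free.app/mcp",
      "headers": {
        "Authorization": "Bearer <your-wandb-api-key>",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```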

Usage tips

  • Provide your W&B entity and project names: Specifying the entity and project in your queries helps the agent return accurate results.
  • Avoid overly broad questions: Instead of “what is my best evaluation?”, ask “what eval had the highest f1 score?”
  • Verify data retrieval: When asking broad questions like “what are my best performing runs?”, ask the assistant to confirm it retrieved all available runs.