
Advanced Configuration

How to configure the W&B Local Server installation
W&B Server starts ready to use on boot with wandb server start. However, several advanced configuration options are available on the /system-admin page of your server once it's up and running. You can email [email protected] to request a trial license to enable more users and teams.
The following is detailed information about the advanced configuration of a local server. When possible, we suggest using our existing Terraform modules to configure your instance.

Configuration as code

All configuration settings can be set via the UI. However, if you would like to manage these options as code, you can set the following environment variables:
| Environment Variable | Description |
| --- | --- |
| LICENSE | Your wandb/local license |
| MYSQL | The MySQL connection string |
| BUCKET | The S3 / GCS bucket for storing data |
| BUCKET_QUEUE | The SQS / Google PubSub queue for object creation events |
| NOTIFICATIONS_QUEUE | The SQS queue on which to publish run events |
| AWS_REGION | The AWS region where your bucket lives |
| HOST | The FQDN of your instance, i.e. https://my.domain.net |
| OIDC_ISSUER | A URL to your Open ID Connect identity provider, i.e. https://cognito-idp.us-east-1.amazonaws.com/us-east-1_uiIFNdacd |
| OIDC_CLIENT_ID | The client ID of the application in your identity provider |
| OIDC_AUTH_METHOD | implicit (default) or pkce; see below for more context |
| SLACK_CLIENT_ID | The client ID of the Slack application you want to use for alerts |
| SLACK_SECRET | The secret of the Slack application you want to use for alerts |
| LOCAL_RESTORE | Set this to true temporarily if you're unable to access your instance; check the logs from the container for temporary credentials |
| REDIS | Can be used to set up an external Redis instance with W&B |
| LOGGING_ENABLED | When set to true, access logs are streamed to stdout; you can also mount a sidecar container and tail /var/log/gorilla.log without setting this variable |
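For example, if you run wandb/local as a Docker container, these variables can be passed at startup. A minimal sketch, assuming the wandb/local image with its data volume mounted at /vol; the license, bucket, and host values are placeholders:

# run wandb/local with configuration supplied as environment variables
docker run -d --name wandb-local -p 8080:8080 \
  -v wandb:/vol \
  -e LICENSE=<YOUR-LICENSE> \
  -e BUCKET=s3://<BUCKET-NAME> \
  -e AWS_REGION=us-east-1 \
  -e HOST=https://wandb.mycompany.com \
  wandb/local

Settings provided this way take effect on boot, without visiting /system-admin.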

Host Configuration

To change the host and port on which your wandb server instance is deployed, run:
wandb server -e HOST=http://<HOST>:<PORT>
You can then connect to this instance by explicitly passing the host to the wandb client when authenticating. Here are several ways to do this:
1. wandb login --host=<HOST>:<PORT>
2. wandb.login(host="<HOST>:<PORT>")
3. export WANDB_BASE_URL=<HOST>:<PORT>
   export WANDB_API_KEY=<API-KEY>
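To confirm the client can reach your instance, you can start a short test run against it. A sketch using the environment-variable approach above; the project name connectivity-test is a hypothetical placeholder:

export WANDB_BASE_URL=http://<HOST>:<PORT>
export WANDB_API_KEY=<API-KEY>
# start and immediately finish a run; it should appear in your instance's UI
python -c "import wandb; wandb.init(project='connectivity-test').finish()"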

SSO & Authentication

By default, a W&B Server runs with manual user management. Licensed versions of wandb/local also unlock SSO. Email [email protected] to schedule a time with us to configure an Auth0 tenant for you with any identity provider Auth0 supports, such as SAML, Ping Federate, Active Directory, etc.
If you already use Auth0 or have an Open ID Connect compatible server, you can follow the instructions below.

Open ID Connect

wandb/local uses Open ID Connect for authentication. When creating an application client in your IDP you should choose Single Page Application or Public Client.

Setting up with AWS Cognito

Because we're only using OIDC for authentication and not authorization, public clients simplify setup.
To configure an application client in your identity provider you'll need to provide an allowed callback url:
  • Add the following allowed Callback URL http(s)://YOUR-W&B-HOST/oidc/callback
  • If your IDP supports universal logout, set Logout URL to http(s)://YOUR-W&B-HOST
For example, in AWS Cognito, if your application is running at https://wandb.mycompany.com, the allowed callback URL would be https://wandb.mycompany.com/oidc/callback.
If your instance is accessible from multiple hosts, be sure to include all of them here.
wandb/local will use the "implicit" grant with the "form_post" response type by default. You can also configure wandb/local to perform an "authorization_code" grant using the PKCE code exchange flow. We request the following scopes for the grant: "openid", "profile", and "email". Your identity provider will need to allow these scopes. For example, in AWS Cognito, the app client must have these scopes enabled:
openid, profile, and email are required
To tell wandb/local which grant to use you can select the Auth Method in the settings page or set the OIDC_AUTH_METHOD environment variable.
For AWS Cognito providers you must set the Auth Method to "pkce"
You'll need a Client ID and the URL of your OIDC issuer. The OpenID discovery document must be available at $OIDC_ISSUER/.well-known/openid-configuration. For example, when using AWS Cognito, you can generate your issuer URL by appending your User Pool ID to the Cognito IDP URL from the User Pools > App Integration tab:
The issuer URL would be https://cognito-idp.us-east-1.amazonaws.com/us-east-1_uiIFNdacd
Do not use the "Cognito domain" for the IDP URL. Cognito provides its discovery document at https://cognito-idp.$REGION.amazonaws.com/$USER_POOL_ID
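To verify your issuer URL before saving it, you can fetch the discovery document directly. A quick check using the example Cognito issuer above; any OIDC-compliant issuer should respond the same way:

# should return a JSON document containing authorization_endpoint, jwks_uri, etc.
curl -s https://cognito-idp.us-east-1.amazonaws.com/us-east-1_uiIFNdacd/.well-known/openid-configuration | python3 -m json.tool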
Once you have everything configured, provide the Issuer, Client ID, and Auth method to wandb/local via /system-admin or the environment variables, and SSO will be configured.

Setting up with Okta

First, set up a new application: navigate to the apps section of your provider's UI and click Add apps.
Name your app integration (e.g. Weights & Biases) and select the grant type implicit (hybrid).
W&B also supports the Authorization Code grant type with PKCE
To configure an application client in your identity provider you'll need to provide an allowed callback url:
  • Add the following allowed Callback URL http(s)://YOUR-W&B-HOST/oidc/callback
  • If your IDP supports universal logout, set Logout URL to http(s)://YOUR-W&B-HOST
For example, if your application was running at https://localhost:8080, the redirect URI would look like https://localhost:8080/oidc/callback
Set the sign-out redirect to http(s)://YOUR-W&B-HOST/logout
Once you have everything configured, provide the Issuer, Client ID, and Auth method to wandb/local via /system-admin or the environment variables, and SSO will be configured.
Sign in to your Weights and Biases server and navigate to the System Settings page.
If you're unable to log in to your instance after configuring SSO, you can restart the instance with the LOCAL_RESTORE=true environment variable set. This will output a temporary password to the container's logs and disable SSO. Once you've resolved any issues with SSO, you must remove that environment variable to enable SSO again.
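For instance, if your server runs as a Docker container, the temporary credentials can be pulled out of the logs. A sketch assuming a container named wandb-local (adjust the container name, and the grep pattern if needed, to match your deployment):

# restart with SSO disabled and a temporary password
docker rm -f wandb-local
docker run -d --name wandb-local -p 8080:8080 -v wandb:/vol -e LOCAL_RESTORE=true wandb/local
# the temporary password is printed in the startup logs
docker logs wandb-local 2>&1 | grep -i password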

File Storage

By default, a W&B Enterprise Server saves files to a local data disk with a capacity that you set when you provision your instance. To support limitless file storage, you may configure your server to use an external cloud file storage bucket with an S3-compatible API.
You should always specify the bucket you're using with the BUCKET environment variable. This removes the need for a persistent volume as all settings can then be persisted to your bucket.

Amazon Web Services

To use an AWS S3 bucket as the file storage backend for W&B, you'll need to create a bucket, along with an SQS queue configured to receive object creation notifications from that bucket. Your instance will need permissions to read from this queue.
Create an SQS Queue
First, create an SQS Standard Queue. Add a permission for all principals for the SendMessage, ReceiveMessage, ChangeMessageVisibility, DeleteMessage, and GetQueueUrl actions. If you'd like, you can lock this down further using an advanced policy document. For instance, a policy that only accepts messages from your bucket looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["sqs:SendMessage"],
      "Resource": "<sqs-queue-arn>",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "<s3-bucket-arn>" }
      }
    }
  ]
}
Create an S3 Bucket and Bucket Notifications
Then, create an S3 bucket. Under the bucket properties page in the console, in the "Events" section of "Advanced Settings", click "Add notification", and configure all object creation events to be sent to the SQS queue you configured earlier.
Enterprise file storage settings
Enable CORS access: your CORS configuration should look like the following:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>http://YOUR-W&B-SERVER-IP</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
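If you prefer the AWS CLI over the console, the queue, bucket notification, and CORS rules can also be set up from the command line. A sketch assuming AWS CLI v2 and placeholder names; note that put-bucket-cors expects the CORS rules as JSON rather than the XML shown above:

# create the queue (add the queue policy above via set-queue-attributes or the console)
aws sqs create-queue --queue-name <WANDB_QUEUE>

# send object-creation events from the bucket to the queue
aws s3api put-bucket-notification-configuration --bucket <BUCKET-NAME> \
  --notification-configuration '{"QueueConfigurations": [{"QueueArn": "<sqs-queue-arn>", "Events": ["s3:ObjectCreated:*"]}]}'

# apply the CORS rules
aws s3api put-bucket-cors --bucket <BUCKET-NAME> \
  --cors-configuration '{"CORSRules": [{"AllowedOrigins": ["http://YOUR-W&B-SERVER-IP"], "AllowedMethods": ["GET", "PUT"], "AllowedHeaders": ["*"]}]}'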
Grant Permissions to Node Running W&B
The node on which W&B Server is running must be configured to permit access to S3 and SQS. Depending on the type of server deployment you've opted for, you may need to add the following policy statements to your node role:
{
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<WANDB_BUCKET>",
        "arn:aws:s3:::<WANDB_BUCKET>/*"
      ]
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": ["sqs:*"],
      "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT>:<WANDB_QUEUE>"
    }
  ]
}
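To attach these statements from the command line, you could save them to a file and add them as an inline policy on the node's role. A sketch, assuming a hypothetical role name wandb-node-role and file name wandb-policy.json:

# attach the policy above to the IAM role of the node running W&B
aws iam put-role-policy --role-name wandb-node-role \
  --policy-name wandb-storage-access \
  --policy-document file://wandb-policy.json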
Configure W&B Server
Finally, navigate to the W&B settings page at http(s)://YOUR-W&B-SERVER-HOST/system-admin. Enable the "Use an external file storage backend" option, and fill in the S3 bucket, region, and SQS queue in the following format:
  • File Storage Bucket: s3://<bucket-name>
  • File Storage Region (AWS only): <region>
  • Notification Subscription: sqs://<queue-name>
Press "Update settings" to apply the new settings.

Google Cloud Platform

To use a GCP Storage bucket as a file storage backend for W&B, you'll need to create a bucket, along with a PubSub topic and subscription configured to receive object creation messages from that bucket.
Create PubSub Topic and Subscription
Navigate to Pub/Sub > Topics in the GCP Console, and click "Create topic". Choose a name and create a topic.
Then click "Create subscription" in the subscriptions table at the bottom of the page. Choose a name, and make sure Delivery Type is set to "Pull". Click "Create".
Make sure the service account or account that your instance is running as has access to this subscription.
Create Storage Bucket
Navigate to Storage > Browser in the GCP Console, and click "Create bucket". Make sure to choose "Standard" storage class.
Make sure the service account or account that your instance is running as has access to this bucket.
Create PubSub Notification
Creating a notification stream from the Storage Bucket to the PubSub Topic can currently only be done via the CLI. Make sure you have gsutil installed and are logged in to the correct GCP project, then run the following:
gcloud pubsub topics list # list names of topics for reference
gsutil ls # list names of buckets for reference
# create bucket notification
gsutil notification create -t <TOPIC-NAME> -f json gs://<BUCKET-NAME>
Add Signing Permissions
To create signed file URLs, your W&B Server also needs the iam.serviceAccounts.signBlob permission in GCP. You can grant it by adding the Service Account Token Creator role to the service account or IAM member that your instance runs as.
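A command-line sketch for granting that role, assuming your instance runs as a service account whose email you substitute for <SA_EMAIL>:

# allow the instance's service account to sign blobs (needed for signed file URLs)
gcloud iam service-accounts add-iam-policy-binding <SA_EMAIL> \
  --member="serviceAccount:<SA_EMAIL>" \
  --role="roles/iam.serviceAccountTokenCreator"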
Grant Permissions to Node Running W&B Server
The node on which W&B Server is running must be able to access the storage bucket and the Pub/Sub subscription. Grant the service account or IAM member your instance runs as read/write access to the bucket and subscriber access to the subscription, as described in the steps above.
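A command-line sketch for these grants, using the same <SA_EMAIL> placeholder; the role choices (Storage Object Admin, Pub/Sub Subscriber) are a reasonable baseline rather than a W&B-prescribed set:

# read/write access to the file storage bucket
gsutil iam ch serviceAccount:<SA_EMAIL>:roles/storage.objectAdmin gs://<BUCKET-NAME>

# pull access to the notification subscription
gcloud pubsub subscriptions add-iam-policy-binding <SUBSCRIPTION-NAME> \
  --member="serviceAccount:<SA_EMAIL>" \
  --role="roles/pubsub.subscriber"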
Configure W&B Server
Finally, navigate to the W&B settings page at http(s)://YOUR-W&B-SERVER-HOST/system-admin. Enable the "Use an external file storage backend" option, and fill in the GCS bucket and Pub/Sub subscription in the following format:
  • File Storage Bucket: gs://<bucket-name>
  • File Storage Region: leave blank
  • Notification Subscription: pubsub:/<project-name>/<topic-name>/<subscription-name>
Press "Update settings" to apply the new settings.

Azure

To use an Azure blob container as the file storage for W&B, you'll need to create a storage account (if you don't already have one you want to use), create a blob container and a queue within that storage account, and then create an event subscription that sends "blob created" notifications to the queue from the blob container.

Create a Storage Account

If you have a storage account you want to use already, you can skip this step.
Navigate to Storage Accounts > Add in the Azure portal. Select an Azure subscription, and select any resource group or create a new one. Enter a name for your storage account.
Azure storage account setup
Click Review and Create, and then, on the summary screen, click Create:
Azure storage account details review

Creating the blob container

Go to Storage Accounts in the Azure portal, and click on your new storage account. In the storage account dashboard, click on Blob service > Containers in the menu:
Create a new container, and set it to Private:
Go to Settings > CORS > Blob service, and enter the IP of your wandb server as an allowed origin, with allowed methods GET and PUT, and all headers allowed and exposed, then save your CORS settings.

Creating the Queue

Go to Queue service > Queues in your storage account, and create a new Queue:
Go to Events in your storage account, and create an event subscription:
Give the event subscription the Event Schema "Event Grid Schema", filter to only the "Blob Created" event type, set the Endpoint Type to Storage Queues, and then select the storage account/queue as the endpoint.
In the Filters tab, enable subject filtering for subjects beginning with /blobServices/default/containers/your-blob-container-name/blobs/
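If you script your Azure setup, the same event subscription can be created with the Azure CLI. A sketch with placeholder resource IDs; the flag values mirror the portal settings described above:

# route "Blob Created" events from the storage account to the queue
az eventgrid event-subscription create \
  --name wandb-blob-created \
  --source-resource-id "/subscriptions/<SUB-ID>/resourceGroups/<RG>/providers/Microsoft.Storage/storageAccounts/<STORAGE-ACCOUNT>" \
  --endpoint-type storagequeue \
  --endpoint "/subscriptions/<SUB-ID>/resourceGroups/<RG>/providers/Microsoft.Storage/storageAccounts/<STORAGE-ACCOUNT>/queueservices/default/queues/<QUEUE-NAME>" \
  --included-event-types Microsoft.Storage.BlobCreated \
  --subject-begins-with "/blobServices/default/containers/<BLOB-CONTAINER-NAME>/blobs/"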

Configure W&B Server

Go to Settings > Access keys in your storage account, click "Show keys", and then copy either key1 > Key or key2 > Key. Set this key on your W&B server as the environment variable AZURE_STORAGE_KEY.
Finally, navigate to the W&B settings page at http(s)://YOUR-W&B-SERVER-HOST/system-admin. Enable the "Use an external file storage backend" option, and fill in the blob container and queue in the following format:
  • File Storage Bucket: az://<storage-account-name>/<blob-container-name>
  • Notification Subscription: az://<storage-account-name>/<queue-name>
Press "Update settings" to apply the new settings.

Advanced Reliability Settings

Redis

Configuring an external Redis server improves the reliability of the service and enables caching, which decreases load times, especially in large projects. We recommend using a managed Redis service (e.g. ElastiCache) with high availability and the following specs:
  • Minimum 4GB of memory, suggested 8GB
  • Redis version 6.x
  • In transit encryption
  • Authentication enabled

Configuring REDIS in the W&B server

To configure the Redis instance with W&B, navigate to the W&B settings page at http(s)://YOUR-W&B-SERVER-HOST/system-admin. Enable the "Use an external Redis instance" option, and fill in the Redis connection string in the following format:
redis://$HOST:$PORT
You can also configure Redis using the environment variable REDIS on the container or in your Kubernetes deployment. Alternatively, you can set up REDIS as a Kubernetes secret.
The above assumes the Redis instance is running at the default port of 6379. If you configure a different port, set up authentication, and also want TLS enabled on the Redis instance, the connection string format would look something like: redis://$USER:$PASSWORD@$HOST:$PORT?tls=true
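For example, on Kubernetes you might keep the connection string out of your manifests by storing it as a secret. A sketch, assuming a hypothetical secret name wandb-redis in the namespace of your W&B deployment:

# store the connection string as a secret (substitute real credentials for the placeholders)
kubectl create secret generic wandb-redis \
  --from-literal=REDIS='redis://$USER:$PASSWORD@$HOST:$PORT?tls=true'
# then expose it to the container as the REDIS environment variable,
# e.g. via env.valueFrom.secretKeyRef in your deployment spec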

Slack

In order to integrate your W&B Server installation with Slack, you'll need to create a suitable Slack application.

Creating the Slack application

Visit https://api.slack.com/apps and select Create New App in the top right.
You can name it whatever you like, but what's important is to select the same Slack workspace as the one you intend to use for alerts.

Configuring the Slack application

Now that we have a Slack application ready, we need to authorize it for use as an OAuth bot. Select OAuth & Permissions in the sidebar on the left.
Under Scopes, supply the bot with the incoming_webhook scope.
Finally, configure the Redirect URL to point to your W&B installation. You should use the same value as what you set Frontend Host to in your local system settings. You can specify multiple URLs if you have different DNS mappings to your instance.
Hit Save URLs once finished.
To further secure your Slack application and prevent abuse, you can specify an IP range under Restrict API Token Usage, whitelisting the IP or IP range of your W&B instance(s).

Register your Slack application with W&B

Navigate to the System Settings page of your W&B instance. Check the box to enable a custom Slack application:
You'll need to supply your Slack application's client ID and secret, which you can find in the Basic Information tab.
That's it! You can now verify that everything is working by setting up a Slack integration in the W&B app. See the alerts documentation for more detailed information.
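One way to test the integration end to end is to trigger an alert from a run after turning on Slack alerts in your W&B user settings. A sketch, assuming a hypothetical test project:

# fire a test alert; it should arrive in the Slack channel you authorized
python -c "import wandb; run = wandb.init(project='alert-test'); run.alert(title='Test alert', text='Slack integration check'); run.finish()"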