On Prem / Baremetal

Hosting W&B Server on bare-metal servers on-premises
Run W&B Server on your own bare-metal infrastructure, connected to scalable external data stores. See the following for instructions on how to provision a new instance and guidance on provisioning the external data stores.
W&B application performance depends on scalable data stores that your operations team must configure and manage. The team must provide a MySQL 5.7 or MySQL 8 database server and an S3-compatible object store for the application to scale properly.
Talk to our sales team by reaching out to [email protected].

MySQL Database

W&B currently supports MySQL 5.7 or MySQL 8.0.28 and above.

MySQL 5.7

There are a number of enterprise services that make operating a scalable MySQL database simpler. We suggest looking into one of the following solutions:

MySQL 8.0

The Weights & Biases application currently only supports MySQL 8 versions 8.0.28 and above.
Some additional performance tuning is required when running your W&B server with MySQL 8.0 or when upgrading from MySQL 5.7 to 8.0. Tuning your database engine with the following settings will improve the overall query performance of the wandb application:
binlog_format = 'ROW'
innodb_online_alter_log_max_size = 268435456
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
binlog_row_image = 'MINIMAL'
Due to changes in the way that MySQL 8.0 handles sort_buffer_size, you may need to update the sort_buffer_size parameter from its default value of 262144. We recommend setting the value to 33554432 (32 MiB) so that the database works efficiently with the wandb application. Note that this only works with MySQL versions 8.0.28 and above.
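As a sanity check after applying these settings, you can confirm the values the running server actually uses; the variable names below match the settings listed above:

```sql
-- Inspect the current values of the tuning parameters listed above
SHOW VARIABLES WHERE Variable_name IN
  ('binlog_format', 'innodb_online_alter_log_max_size',
   'sync_binlog', 'innodb_flush_log_at_trx_commit',
   'binlog_row_image', 'sort_buffer_size');
```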
The most important things to consider when running your own MySQL database are:
  1. Backups. You should periodically back up the database to a separate facility. We suggest daily backups with at least 1 week of retention.
  2. Performance. The disk the server is running on should be fast. We suggest running the database on an SSD or accelerated NAS.
  3. Monitoring. The database should be monitored for load. If CPU usage is sustained at > 40% of the system for more than 5 minutes, it's likely a good indication the server is resource starved.
  4. Availability. Depending on your availability and durability requirements, you may want to configure a hot standby on a separate machine that streams all updates in real time from the primary server and can be used to fail over in case the primary server crashes or becomes corrupted.
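As a sketch of the backup recommendation above, a nightly mysqldump job scheduled via cron, paired with a retention cleanup. This assumes the mysqldump client is installed and credentials are supplied via an option file; the paths and schedule are illustrative, not prescriptive:

```
# Nightly logical backup of the wandb_local database at 03:00,
# keeping 7 days of compressed dumps (paths are examples)
0 3 * * * mysqldump --single-transaction wandb_local | gzip > /backups/wandb_local-$(date +\%F).sql.gz
30 3 * * * find /backups -name 'wandb_local-*.sql.gz' -mtime +7 -delete
```

The --single-transaction flag takes a consistent snapshot of InnoDB tables without locking them, so the backup can run against a live server.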
Once you've provisioned a compatible MySQL database, you can create a database and a user using the following SQL (replacing SOME_PASSWORD with a password of your choice).
CREATE USER 'wandb_local'@'%' IDENTIFIED BY 'SOME_PASSWORD';
CREATE DATABASE wandb_local CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
GRANT ALL ON wandb_local.* TO 'wandb_local'@'%' WITH GRANT OPTION;

Object Store

The object store can be an externally hosted MinIO cluster, or any S3-compatible object store that supports signed URLs. When connecting to an S3-compatible object store, you can specify your credentials in the connection string, i.e.
s3://$ACCESS_KEY:$SECRET_KEY@$HOST/$BUCKET_NAME
By default we assume third-party object stores are not running over HTTPS. If you've configured a trusted SSL certificate for your object store, you can tell W&B to connect only over TLS by adding the tls query parameter to the URL, i.e.
s3://$ACCESS_KEY:$SECRET_KEY@$HOST/$BUCKET_NAME?tls=true
This will only work if the SSL certificate is trusted. We do not support self-signed certificates.
When using third-party object stores, set BUCKET_QUEUE to internal://. This tells the W&B server to manage all object notifications internally instead of depending on an external SQS queue or equivalent.
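The connection string above is a standard URL, so special characters in the access key or secret key must be percent-encoded. A minimal sketch (Python; the helper name is hypothetical) of building a BUCKET value with the optional tls flag:

```python
from urllib.parse import quote, urlsplit, parse_qs

def build_bucket_url(access_key, secret_key, host, bucket, tls=False):
    """Build an s3:// connection string of the form used by the BUCKET
    setting: s3://KEY:SECRET@HOST/BUCKET[?tls=true].  Credentials are
    percent-encoded so characters like '/' or '@' in a secret don't
    break URL parsing."""
    url = f"s3://{quote(access_key, safe='')}:{quote(secret_key, safe='')}@{host}/{bucket}"
    if tls:
        url += "?tls=true"
    return url

url = build_bucket_url("AKIAEXAMPLE", "se/cr@t", "minio.internal:9000", "wandb-files", tls=True)
parts = urlsplit(url)
# The credentials round-trip cleanly through a standard URL parser
assert parts.username == "AKIAEXAMPLE"
assert parts.hostname == "minio.internal"
assert parse_qs(parts.query) == {"tls": ["true"]}
```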
The most important things to consider when running your own object store are:
  1. Storage capacity and performance. It's fine to use magnetic disks, but you should be monitoring the capacity of these disks. Average W&B usage results in tens to hundreds of gigabytes. Heavy usage could result in petabytes of storage consumption.
  2. Fault tolerance. At a minimum, the physical disk storing the objects should be on a RAID array. If you're using MinIO, consider running it in distributed mode.
  3. Availability. Monitoring should be configured to ensure the storage is available.
There are many enterprise alternatives to running your own object storage service such as:

Minio setup

If you're using MinIO, you can run the following commands to create a bucket:
mc config host add local http://$MINIO_HOST:$MINIO_PORT "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY" --api s3v4
mc mb --region=us-east1 local/local-files

Kubernetes Deployment

The following Kubernetes YAML can be customized, but should serve as a basic foundation for configuring wandb/local in Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wandb
  labels:
    app: wandb
spec:
  strategy:
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: wandb
  template:
    metadata:
      labels:
        app: wandb
    spec:
      containers:
        - name: wandb
          env:
            - name: HOST
              value: https://YOUR_DNS_NAME
            - name: LICENSE
              value: XXXXX # your license key
            - name: BUCKET
              value: s3://$ACCESS_KEY:$SECRET_KEY@$HOST/$BUCKET_NAME
            - name: BUCKET_QUEUE
              value: internal://
            - name: AWS_REGION
              value: us-east-1
            - name: MYSQL
              value: mysql://$USERNAME:$PASSWORD@$HOSTNAME/$DATABASE
          imagePullPolicy: IfNotPresent
          image: wandb/local:latest
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
          readinessProbe:
            httpGet:
              path: /ready
              port: http
          startupProbe:
            httpGet:
              path: /ready
              port: http
            failureThreshold: 60 # allow 10 minutes for migrations
          resources:
            requests:
              cpu: "2000m"
              memory: 4G
            limits:
              cpu: "4000m"
              memory: 8G
---
apiVersion: v1
kind: Service
metadata:
  name: wandb-service
spec:
  type: NodePort
  selector:
    app: wandb
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wandb-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  defaultBackend:
    service:
      name: wandb-service
      port:
        number: 80
The Kubernetes YAML above should work in most on-premises installations. However, the details of your Ingress and optional SSL termination will vary. See Networking below.
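The startupProbe's failureThreshold of 60 works together with the probe period (Kubernetes defaults periodSeconds to 10 when unset, as in the manifest above) to determine the total startup budget. A quick sketch of that arithmetic:

```python
# Kubernetes allows failureThreshold * periodSeconds seconds for a
# startupProbe to succeed before the container is restarted.
failure_threshold = 60   # from the manifest above
period_seconds = 10      # Kubernetes default when periodSeconds is unset

startup_budget = failure_threshold * period_seconds
print(startup_budget // 60, "minutes")  # → 10 minutes, matching the manifest comment
```

If your database migrations routinely take longer than this, raise failureThreshold rather than periodSeconds, so a healthy container is still detected quickly.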

Helm Chart

W&B also supports deploying via a Helm Chart. The official W&B helm chart can be found here.


OpenShift

W&B supports operating from within an OpenShift Kubernetes cluster. Simply follow the instructions in the Kubernetes Deployment section above.

Running the container as an un-privileged user

By default the container runs with a $UID of 999. If your orchestrator requires the container to run as a non-root user, you can specify a $UID >= 100000 and a $GID of 0. The container must be started with the root group ($GID=0) for file system permissions to function properly. This is the default behavior when running containers in OpenShift. An example security context for Kubernetes would look like:
runAsUser: 100000
runAsGroup: 0


Docker Deployment

You can run wandb/local on any instance that also has Docker installed. We suggest at least 8 GB of RAM and 4 vCPUs. Run the following command to launch the container:
docker run --rm -d \
-e HOST=https://YOUR_DNS_NAME \
-e BUCKET_QUEUE=internal:// \
-e AWS_REGION=us-east1 \
-p 8080:8080 --name wandb-local wandb/local
You'll want to configure a process manager to ensure this process is restarted if it crashes. A good overview of using SystemD to do this can be found here.
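The overview linked above covers the details; as a minimal sketch, a systemd unit wrapping the Docker command above might look like the following (the unit name and paths are illustrative):

```
# /etc/systemd/system/wandb-local.service (example path)
[Unit]
Description=W&B Server (wandb/local)
After=docker.service
Requires=docker.service

[Service]
# --rm is dropped so systemd owns the lifecycle; Restart=always
# re-launches the container if it crashes
ExecStartPre=-/usr/bin/docker rm -f wandb-local
ExecStart=/usr/bin/docker run -e HOST=https://YOUR_DNS_NAME \
  -e BUCKET_QUEUE=internal:// -e AWS_REGION=us-east1 \
  -p 8080:8080 --name wandb-local wandb/local
ExecStop=/usr/bin/docker stop wandb-local
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now wandb-local.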


Networking

Load Balancer

You'll want to run a load balancer that terminates network requests at the appropriate network boundary. Some customers expose their wandb service on the internet; others expose it only on an internal VPN/VPC. It's important that both the machines executing machine learning payloads and the devices users use to access the service through web browsers can communicate with this endpoint. Common choices include an Nginx reverse proxy (an example configuration is given below) or your cloud provider's load balancer.


SSL

The W&B server does not terminate SSL. If your security policies require SSL communication within your trusted networks, consider using a tool like Istio and sidecar containers. The load balancer itself should terminate SSL with a valid certificate. Using self-signed certificates is not supported and will cause a number of challenges for users. If possible, a service like Let's Encrypt is a great way to provide trusted certificates to your load balancer. Services like Caddy and Cloudflare manage SSL for you.

Example Nginx Configuration

The following is an example configuration using nginx as a reverse proxy.
events {}
http {
    # If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
    # scheme used to connect to this server
    map $http_x_forwarded_proto $proxy_x_forwarded_proto {
        default $http_x_forwarded_proto;
        '' $scheme;
    }

    # Also, in the above case, force HTTPS
    map $http_x_forwarded_proto $sts {
        default '';
        "https" "max-age=31536000; includeSubDomains";
    }

    # If we receive X-Forwarded-Host, pass it through; otherwise, pass along $http_host
    map $http_x_forwarded_host $proxy_x_forwarded_host {
        default $http_x_forwarded_host;
        '' $http_host;
    }

    # If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
    # server port the client connected to
    map $http_x_forwarded_port $proxy_x_forwarded_port {
        default $http_x_forwarded_port;
        '' $server_port;
    }

    # If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
    # Connection header that may have been passed to this server
    map $http_upgrade $proxy_connection {
        default upgrade;
        '' close;
    }

    server {
        listen 443 ssl;
        # ssl_certificate and ssl_certificate_key directives for your
        # trusted certificate go here

        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $proxy_connection;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
        proxy_set_header X-Forwarded-Host $proxy_x_forwarded_host;
        keepalive_timeout 10;

        location / {
            proxy_pass http://$YOUR_UPSTREAM_SERVER_IP:8080/;
        }
    }
}

Verifying your installation

Regardless of how your server was installed, it's a good idea to verify that everything is configured properly. W&B makes it easy to verify your configuration using our CLI.
pip install wandb
wandb login --host=https://YOUR_DNS_DOMAIN
wandb verify
If you see any errors, contact W&B support. You can also see any errors the application hit at startup by checking the logs.


If you're running with Docker:

docker logs wandb-local


If you're running in Kubernetes:

kubectl get pods
kubectl logs wandb-XXXXX-XXXXX