Run W&B Server on Kubernetes

Deploy W&B Platform with Kubernetes Operator

W&B Kubernetes Operator

Use the W&B Kubernetes Operator to simplify deploying, administering, troubleshooting, and scaling your W&B Server deployments on Kubernetes. You can think of the operator as a smart assistant for your W&B instance.

The W&B Server architecture and design continuously evolve to expand AI developer tooling capabilities and to provide appropriate primitives for high performance, better scalability, and easier administration. That evolution applies to the compute services, the relevant storage, and the connectivity between them. To help facilitate continuous updates and improvements across deployment types, W&B uses a Kubernetes operator.

For more information about Kubernetes operators, see Operator pattern in the Kubernetes documentation.

Reasons for the architecture shift

Historically, the W&B application was deployed as a single deployment and pod within a Kubernetes cluster, or as a single Docker container. W&B has recommended, and continues to recommend, externalizing the database and object store. Externalizing the database and object store decouples the application’s state.

As the application grew, the need to evolve from a monolithic container to a distributed system (microservices) became apparent. This change facilitates backend logic handling and seamlessly introduces built-in Kubernetes infrastructure capabilities. A distributed system also supports deploying the new services that additional W&B features rely on.

Before 2024, any Kubernetes-related change required manually updating the terraform-kubernetes-wandb Terraform module. Updating the Terraform module meant ensuring compatibility across cloud providers, configuring the necessary Terraform variables, and executing a Terraform apply for each backend or Kubernetes-level change.

This process was not scalable since W&B Support had to assist each customer with upgrading their Terraform module.

The solution was to implement an operator that connects to a central deploy.wandb.ai server to request the latest specification changes for a given release channel and apply them. Updates are received as long as the license is valid. Helm is used as both the deployment mechanism for the W&B operator and the means for the operator to handle all configuration templating of the W&B Kubernetes stack, Helm-ception.

How it works

You can install the operator with Helm or from source. See charts/operator for detailed instructions.

The installation process creates a deployment called controller-manager and uses a custom resource definition named weightsandbiases.apps.wandb.com (shortName: wandb) that takes a single spec and applies it to the cluster:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: weightsandbiases.apps.wandb.com
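
After the installation, you can inspect the CRD and list any WeightsAndBiases custom resources with kubectl. This is a quick sanity-check sketch, assuming the default resource names shown above:

kubectl get crd weightsandbiases.apps.wandb.com
# The CRD registers the short name "wandb", so custom resources can also be listed with:
kubectl get wandb --all-namespaces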

The controller-manager installs charts/operator-wandb based on the spec of the custom resource, the release channel, and a user-defined config. The configuration specification hierarchy enables maximum configuration flexibility on the user's end and enables W&B to release new images, configurations, features, and Helm updates automatically.

Refer to the configuration specification hierarchy and configuration reference for configuration options.

Configuration specification hierarchy

Configuration specifications follow a hierarchical model where higher-level specifications override lower-level ones. Here’s how it works:

  • Release Channel Values: This base level configuration sets default values and configurations based on the release channel set by W&B for the deployment.
  • User Input Values: Users can override the default settings provided by the Release Channel Spec through the System Console.
  • Custom Resource Values: The highest level of specification, which comes from the user. Any values specified here override both the User Input and Release Channel specifications. For a detailed description of the configuration options, see Configuration Reference.

This hierarchical model ensures that configurations are flexible and customizable to meet varying needs while maintaining a manageable and systematic approach to upgrades and changes.
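
For example, if the release channel default and the System Console both set an application image tag, a tag specified in the custom resource wins. A minimal sketch (the tag value is hypothetical):

spec:
  values:
    app:
      image:
        tag: 0.60.0  # hypothetical; overrides the release channel default and any System Console setting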

Requirements to use the W&B Kubernetes Operator

Satisfy the following requirements to deploy W&B with the W&B Kubernetes operator:

Refer to the reference architecture. In addition, obtain a valid W&B Server license.

See this guide for a detailed explanation on how to set up and configure a self-managed installation.

Depending on the installation method, you might need to meet the following requirements:

  • kubectl installed and configured with the correct Kubernetes cluster context.
  • Helm installed.
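
To confirm both tools are available and pointed at the right cluster, a quick check might look like this:

kubectl config current-context
kubectl get nodes
helm version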

Air-gapped installations

See the Deploy W&B in airgapped environment with Kubernetes tutorial on how to install the W&B Kubernetes Operator in an airgapped environment.

Deploy W&B Server application

This section describes different ways to deploy the W&B Kubernetes operator.

Choose one of the following:

  • If you have provisioned all required external services and want to deploy W&B onto Kubernetes with Helm CLI, continue here.
  • If you prefer managing infrastructure and the W&B Server with Terraform, continue here.
  • If you want to utilize the W&B Cloud Terraform Modules, continue here.

Deploy W&B with Helm CLI

W&B provides a Helm Chart to deploy the W&B Kubernetes operator to a Kubernetes cluster. This approach allows you to deploy W&B Server with the Helm CLI or a continuous delivery tool like ArgoCD. Make sure that the requirements mentioned above are in place.

Follow these steps to install the W&B Kubernetes Operator with the Helm CLI:

  1. Add the W&B Helm repository. The W&B Helm chart is available in the W&B Helm repository. Add the repo with the following commands:
helm repo add wandb https://charts.wandb.ai
helm repo update
  2. Install the Operator on a Kubernetes cluster. Copy and paste the following:
helm upgrade --install operator wandb/operator -n wandb-cr --create-namespace
  3. Configure the W&B operator custom resource to trigger the W&B Server installation. Copy this example configuration to a file named operator.yaml so that you can customize your W&B deployment. Refer to the Configuration Reference.

    apiVersion: apps.wandb.com/v1
    kind: WeightsAndBiases
    metadata:
      labels:
        app.kubernetes.io/instance: wandb
        app.kubernetes.io/name: weightsandbiases
      name: wandb
      namespace: default
    
    spec:
      chart:
        url: http://charts.yourdomain.com
        name: operator-wandb
        version: 0.18.0
    
      values:
        global:
          host: https://wandb.yourdomain.com
          license: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
          bucket:
            accessKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            secretKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            name: s3.yourdomain.com:port #Ex.: s3.yourdomain.com:9000
            path: bucket_name
            provider: s3
            region: us-east-1
          mysql:
            database: wandb
            host: mysql.home.lab
            password: password
            port: 3306
            user: wandb
          extraEnv:
            ENABLE_REGISTRY_UI: 'true'
    
        # Ensure it's set to use your own MySQL
        mysql:
          install: false
    
        app:
          image:
            repository: registry.yourdomain.com/local
            tag: 0.59.2
    
        console:
          image:
            repository: registry.yourdomain.com/console
            tag: 2.12.2
    
        ingress:
          annotations:
            nginx.ingress.kubernetes.io/proxy-body-size: 64m
          class: nginx
    

    Start the Operator with your custom configuration so that it can install and configure the W&B Server application.

    kubectl apply -f operator.yaml
    

    Wait until the deployment completes. This takes a few minutes.

  4. To verify the installation using the web UI, create the first admin user account, then follow the verification steps outlined in Verify the installation.
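
You can also check progress from the command line. A rough sketch, assuming the namespaces used in the commands above (wandb-cr for the operator, default for the W&B custom resource):

kubectl get pods -n wandb-cr    # operator (controller-manager) pods
kubectl get wandb -n default    # W&B custom resource status
kubectl get pods -n default     # W&B application pods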

Deploy W&B with Helm Terraform Module

This method allows for customized deployments tailored to specific requirements, leveraging Terraform’s infrastructure-as-code approach for consistency and repeatability. The official W&B Helm-based Terraform Module is located here.

The following code can be used as a starting point and includes all necessary configuration options for a production-grade deployment.

module "wandb" {
  source  = "wandb/wandb/helm"

  spec = {
    values = {
      global = {
        host    = "https://<HOST_URI>"
        license = "eyJhbGnUzaH...j9ZieKQ2x5GGfw"

        bucket = {
          <details depend on the provider>
        }

        mysql = {
          <redacted>
        }
      }

      ingress = {
        annotations = {
          "a" = "b"
          "x" = "y"
        }
      }
    }
  }
}

Note that the configuration options are the same as described in the Configuration Reference, but the syntax must follow the HashiCorp Configuration Language (HCL). The Terraform module creates the W&B custom resource definition (CRD).

To see how W&B itself uses the Helm Terraform module to deploy “Dedicated cloud” installations for customers, follow these links:

Deploy W&B with W&B Cloud Terraform modules

W&B provides a set of Terraform Modules for AWS, GCP, and Azure. These modules deploy the entire infrastructure, including Kubernetes clusters, load balancers, and MySQL databases, as well as the W&B Server application. The W&B Kubernetes Operator comes pre-baked with these official W&B cloud-specific Terraform Modules starting with the following versions:

Terraform Registry   Source Code                                         Version
AWS                  https://github.com/wandb/terraform-aws-wandb        v4.0.0+
Azure                https://github.com/wandb/terraform-azurerm-wandb    v2.0.0+
GCP                  https://github.com/wandb/terraform-google-wandb     v2.0.0+

This integration ensures that W&B Kubernetes Operator is ready to use for your instance with minimal setup, providing a streamlined path to deploying and managing W&B Server in your cloud environment.

For a detailed description of how to use these modules, refer to the self-managed installations section in the docs.

Verify the installation

To verify the installation, W&B recommends using the W&B CLI. The verify command executes several tests that verify all components and configurations.

Follow these steps to verify the installation:

  1. Install the W&B CLI:

    pip install wandb
    
  2. Log in to W&B:

    wandb login --host=https://YOUR_DNS_DOMAIN
    

    For example:

    wandb login --host=https://wandb.company-name.com
    
  3. Verify the installation:

    wandb verify
    

A successful installation and fully working W&B deployment shows the following output:

Default host selected:  https://wandb.company-name.com
Find detailed logs for this test at: /var/folders/pn/b3g3gnc11_sbsykqkm3tx5rh0000gp/T/tmpdtdjbxua/wandb
Checking if logged in...................................................✅
Checking signed URL upload..............................................✅
Checking ability to send large payloads through proxy...................✅
Checking requests to base url...........................................✅
Checking requests made over signed URLs.................................✅
Checking CORs configuration of the bucket...............................✅
Checking wandb package version is up to date............................✅
Checking logged metrics, saving and downloading a file..................✅
Checking artifact save and download workflows...........................✅

Access the W&B Management Console

The W&B Kubernetes operator comes with a management console. It is located at ${HOST_URI}/console, for example https://wandb.company-name.com/console.

There are two ways to log in to the management console:

Option 1: Through the W&B application

  1. Open the W&B application in the browser and log in. Log in to the W&B application at ${HOST_URI}/, for example https://wandb.company-name.com/

  2. Access the console. Click the icon in the top right corner, then click System console. Only users with admin privileges can see the System console entry.

Option 2: Directly through the console URL

  1. Open the console application in the browser. Open the URL described above, which redirects you to the login screen.
  2. Retrieve the password from the Kubernetes secret that the installation generates:
    kubectl get secret wandb-password -o jsonpath='{.data.password}' | base64 -d
    
    Copy the password.
  3. Log in to the console. Paste the copied password, then click Login.

Update the W&B Kubernetes operator

This section describes how to update the W&B Kubernetes operator.

Copy and paste the code snippets below into your terminal.

  1. First, update the repo with helm repo update:

    helm repo update
    
  2. Next, update the Helm chart with helm upgrade:

    helm upgrade operator wandb/operator -n wandb-cr --reuse-values
    

Update the W&B Server application

You no longer need to update the W&B Server application if you use the W&B Kubernetes operator.

The operator automatically updates your W&B Server application when W&B releases a new version of the software.

Migrate self-managed instances to W&B Operator

The following sections describe how to migrate from self-managing your own W&B Server installation to using the W&B Operator to manage it for you. The migration process depends on how you installed W&B Server:

Migrate to Operator-based AWS Terraform Modules

For a detailed description of the migration process, continue here.

Migrate to Operator-based GCP Terraform Modules

Reach out to Customer Support or your W&B team if you have any questions or need assistance.

Migrate to Operator-based Azure Terraform Modules

Reach out to Customer Support or your W&B team if you have any questions or need assistance.

Migrate to Operator-based Helm chart

Follow these steps to migrate to the Operator-based Helm chart:

  1. Get the current W&B configuration. If W&B was deployed with a non-operator-based version of the Helm chart, export the values like this:

    helm get values wandb
    

    If W&B was deployed with Kubernetes manifests, export the values like this:

    kubectl get deployment wandb -o yaml
    

    You now have all the configuration values you need for the next step.

  2. Create a file called operator.yaml. Follow the format described in the Configuration Reference. Use the values from step 1.

  3. Scale the current deployment to 0 pods. This step stops the current deployment.

    kubectl scale --replicas=0 deployment wandb
    
  4. Update the Helm chart repo:

    helm repo update
    
  5. Install the new Helm chart:

    helm upgrade --install operator wandb/operator -n wandb-cr --create-namespace
    
  6. Configure the new Helm chart and trigger the W&B application deployment. Apply the new configuration.

    kubectl apply -f operator.yaml
    

    The deployment takes a few minutes to complete.

  7. Verify the installation. Make sure that everything works by following the steps in Verify the installation.

  8. Remove the old installation. Uninstall the old Helm chart or delete the resources that were created with manifests.

Migrate to Operator-based Terraform Helm chart

Follow these steps to migrate to the Operator-based Helm chart:

  1. Prepare the Terraform config. In your Terraform config, replace the Terraform code from the old deployment with the code described here. Set the same variables as before. Do not change the .tfvars file if you have one.
  2. Execute the Terraform run. Execute terraform init, terraform plan, and terraform apply, as sketched after this list.
  3. Verify the installation. Make sure that everything works by following the steps in Verify the installation.
  4. Remove the old installation. Uninstall the old Helm chart or delete the resources that were created with manifests.
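
The Terraform commands in step 2 follow the standard workflow; for example:

terraform init
terraform plan
terraform apply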

Configuration Reference for W&B Server

This section describes the configuration options for the W&B Server application. The application receives its configuration as a custom resource of kind WeightsAndBiases. Some configuration options are exposed through the configuration below, while others need to be set as environment variables.

The documentation has two lists of environment variables: basic and advanced. Only use environment variables if the configuration options that you need are not exposed through the Helm chart.

The W&B Server application configuration file for a production deployment requires the following contents. This YAML file defines the desired state of your W&B deployment, including the version, environment variables, external resources like databases, and other necessary settings.

apiVersion: apps.wandb.com/v1
kind: WeightsAndBiases
metadata:
  labels:
    app.kubernetes.io/name: weightsandbiases
    app.kubernetes.io/instance: wandb
  name: wandb
  namespace: default
spec:
  values:
    global:
      host: https://<HOST_URI>
      license: eyJhbGnUzaH...j9ZieKQ2x5GGfw
      bucket:
        <details depend on the provider>
      mysql:
        <redacted>
    ingress:
      annotations:
        <redacted>

Find the full set of values in the W&B Helm repository, and change only those values you need to override.
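
One way to inspect the full chart defaults locally is with helm show values. A sketch, assuming the wandb Helm repository added earlier and the operator-wandb chart name used in the custom resource:

helm show values wandb/operator-wandb > operator-wandb-defaults.yaml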

Complete example

This is an example configuration that uses GCP Kubernetes with GCP Ingress and GCS (GCP Object storage):

apiVersion: apps.wandb.com/v1
kind: WeightsAndBiases
metadata:
  labels:
    app.kubernetes.io/name: weightsandbiases
    app.kubernetes.io/instance: wandb
  name: wandb
  namespace: default
spec:
  values:
    global:
      host: https://abc-wandb.sandbox-gcp.wandb.ml
      bucket:
        name: abc-wandb-moving-pipefish
        provider: gcs
      mysql:
        database: wandb_local
        host: 10.218.0.2
        name: wandb_local
        password: 8wtX6cJHizAZvYScjDzZcUarK4zZGjpV
        port: 3306
        user: wandb
      license: eyJhbGnUzaHgyQjQyQWhEU3...ZieKQ2x5GGfw
    ingress:
      annotations:
        ingress.gcp.kubernetes.io/pre-shared-cert: abc-wandb-cert-creative-puma
        kubernetes.io/ingress.class: gce
        kubernetes.io/ingress.global-static-ip-name: abc-wandb-operator-address

Host

 # Provide the FQDN with protocol
global:
  # example host name, replace with your own
  host: https://wandb.example.com

Object storage (bucket)

AWS

global:
  bucket:
    provider: "s3"
    name: ""
    kmsKey: ""
    region: ""

GCP

global:
  bucket:
    provider: "gcs"
    name: ""

Azure

global:
  bucket:
    provider: "az"
    name: ""
    secretKey: ""

Other providers (Minio, Ceph, etc.)

For other S3 compatible providers, set the bucket configuration as follows:

global:
  bucket:
    # Example values, replace with your own
    provider: s3
    name: storage.example.com
    kmsKey: null
    path: wandb
    region: default
    accessKey: 5WOA500...P5DK7I
    secretKey: HDKYe4Q...JAp1YyjysnX

For S3-compatible storage hosted outside of AWS, kmsKey must be null.

To reference accessKey and secretKey from a secret:

global:
  bucket:
    # Example values, replace with your own
    provider: s3
    name: storage.example.com
    kmsKey: null
    path: wandb
    region: default
    secret:
      secretName: bucket-secret
      accessKeyName: ACCESS_KEY
      secretKeyName: SECRET_KEY
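
The referenced Kubernetes secret can be created ahead of time with kubectl; for example (the secret and key names match the snippet above, the credential values are placeholders):

kubectl create secret generic bucket-secret \
  --from-literal=ACCESS_KEY=<access-key> \
  --from-literal=SECRET_KEY=<secret-key>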

MySQL

global:
   mysql:
     # Example values, replace with your own
     host: db.example.com
     port: 3306
     database: wandb_local
     user: wandb
     password: 8wtX6cJH...ZcUarK4zZGjpV 

To reference the password from a secret:

global:
   mysql:
     # Example values, replace with your own
     host: db.example.com
     port: 3306
     database: wandb_local
     user: wandb
     passwordSecret:
       name: database-secret
       passwordKey: MYSQL_WANDB_PASSWORD
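
As with the bucket credentials, you can create this secret up front; for example (the secret name and key match the snippet above, the password is a placeholder):

kubectl create secret generic database-secret \
  --from-literal=MYSQL_WANDB_PASSWORD=<password>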

License

global:
  # Example license, replace with your own
  license: eyJhbGnUzaHgyQjQy...VFnPS_KETXg1hi

To reference the license from a secret:

global:
  licenseSecret:
    name: license-secret
    key: CUSTOMER_WANDB_LICENSE

Ingress

To identify the ingress class, see this FAQ entry.

Without TLS

global:
# IMPORTANT: Ingress is on the same level in the YAML as ‘global’ (not a child)
ingress:
  class: ""

With TLS

Create a secret that contains the certificate

kubectl create secret tls wandb-ingress-tls --key wandb-ingress-tls.key --cert wandb-ingress-tls.crt

Reference the secret in the ingress configuration

global:
# IMPORTANT: Ingress is on the same level in the YAML as ‘global’ (not a child)
ingress:
  class: ""
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  tls: 
    - secretName: wandb-ingress-tls
      hosts:
        - <HOST_URI>

In case of Nginx you might have to add the following annotation:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 64m

Custom Kubernetes ServiceAccounts

Specify custom Kubernetes service accounts to run the W&B pods.

The following snippet creates a service account as part of the deployment with the specified name:

app:
  serviceAccount:
    name: custom-service-account
    create: true

parquet:
  serviceAccount:
    name: custom-service-account
    create: true

global:
  ...

The subsystems “app” and “parquet” run under the specified service account. The other subsystems run under the default service account.

If the service account already exists on the cluster, set create: false:

app:
  serviceAccount:
    name: custom-service-account
    create: false

parquet:
  serviceAccount:
    name: custom-service-account
    create: false
    
global:
  ...

You can specify service accounts on different subsystems such as app, parquet, console, and others:

app:
  serviceAccount:
    name: custom-service-account
    create: true

console:
  serviceAccount:
    name: custom-service-account
    create: true

global:
  ...

The service accounts can be different between the subsystems:

app:
  serviceAccount:
    name: custom-service-account
    create: false

console:
  serviceAccount:
    name: another-custom-service-account
    create: true

global:
  ...

External Redis

redis:
  install: false

global:
  redis:
    host: ""
    port: 6379
    password: ""
    parameters: {}
    caCert: ""

To reference the password from a secret:

kubectl create secret generic redis-secret --from-literal=redis-password=supersecret

Reference it in the configuration below:

redis:
  install: false

global:
  redis:
    host: redis.example
    port: 9001
    auth:
      enabled: true
      secret: redis-secret
      key: redis-password

LDAP

Without TLS

global:
  ldap:
    enabled: true
    # LDAP server address including "ldap://" or "ldaps://"
    host:
    # LDAP search base to use for finding users
    baseDN:
    # LDAP user to bind with (if not using anonymous bind)
    bindDN:
    # Secret name and key with LDAP password to bind with (if not using anonymous bind)
    bindPW:
    # LDAP attribute for email and group ID attribute names as comma separated string values.
    attributes:
    # LDAP group allow list
    groupAllowList:
    # Enable LDAP TLS
    tls: false

With TLS

The LDAP TLS cert configuration requires a config map pre-created with the certificate content.

To create the config map you can use the following command:

kubectl create configmap ldap-tls-cert --from-file=certificate.crt

And use the config map in the YAML like the example below

global:
  ldap:
    enabled: true
    # LDAP server address including "ldap://" or "ldaps://"
    host:
    # LDAP search base to use for finding users
    baseDN:
    # LDAP user to bind with (if not using anonymous bind)
    bindDN:
    # Secret name and key with LDAP password to bind with (if not using anonymous bind)
    bindPW:
    # LDAP attribute for email and group ID attribute names as comma separated string values.
    attributes:
    # LDAP group allow list
    groupAllowList:
    # Enable LDAP TLS
    tls: true
    # ConfigMap name and key with CA certificate for LDAP server
    tlsCert:
      configMap:
        name: "ldap-tls-cert"
        key: "certificate.crt"

OIDC SSO

global: 
  auth:
    sessionLengthHours: 720
    oidc:
      clientId: ""
      secret: ""
      # Only include if your IdP requires it.
      authMethod: ""
      issuer: ""

authMethod is optional.

SMTP

global:
  email:
    smtp:
      host: ""
      port: 587
      user: ""
      password: ""

Environment Variables

global:
  extraEnv:
    GLOBAL_ENV: "example"

Custom certificate authority

customCACerts is a list and can take many certificates. Certificate authorities specified in customCACerts only apply to the W&B Server application.

global:
  customCACerts:
  - |
    -----BEGIN CERTIFICATE-----
    MIIBnDCCAUKgAwIBAg.....................fucMwCgYIKoZIzj0EAwIwLDEQ
    MA4GA1UEChMHSG9tZU.....................tZUxhYiBSb290IENBMB4XDTI0
    MDQwMTA4MjgzMFoXDT.....................oNWYggsMo8O+0mWLYMAoGCCqG
    SM49BAMCA0gAMEUCIQ.....................hwuJgyQRaqMI149div72V2QIg
    P5GD+5I+02yEp58Cwxd5Bj2CvyQwTjTO4hiVl1Xd0M0=
    -----END CERTIFICATE-----    
  - |
    -----BEGIN CERTIFICATE-----
    MIIBxTCCAWugAwIB.......................qaJcwCgYIKoZIzj0EAwIwLDEQ
    MA4GA1UEChMHSG9t.......................tZUxhYiBSb290IENBMB4XDTI0
    MDQwMTA4MjgzMVoX.......................UK+moK4nZYvpNpqfvz/7m5wKU
    SAAwRQIhAIzXZMW4.......................E8UFqsCcILdXjAiA7iTluM0IU
    aIgJYVqKxXt25blH/VyBRzvNhViesfkNUQ==
    -----END CERTIFICATE-----    

CA certificates can also be stored in a ConfigMap:

global:
  caCertsConfigMap: custom-ca-certs

The ConfigMap must look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca-certs
data:
  ca-cert1.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----    
  ca-cert2.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----    
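
One way to produce such a ConfigMap is directly from the certificate files; a sketch, assuming ca-cert1.crt and ca-cert2.crt exist locally (kubectl uses each file name as the key):

kubectl create configmap custom-ca-certs --from-file=ca-cert1.crt --from-file=ca-cert2.crt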

Custom security context

Each W&B component supports custom security context configurations of the following form:

pod:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    runAsGroup: 0
    fsGroup: 1001
    fsGroupChangePolicy: Always
    seccompProfile:
      type: RuntimeDefault
container:
  securityContext:
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: false
    allowPrivilegeEscalation: false 

For example, to configure the application pod, add a section app to your configuration:

global:
  ...
app:
  pod:
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
      runAsGroup: 0
      fsGroup: 1001
      fsGroupChangePolicy: Always
      seccompProfile:
        type: RuntimeDefault
  container:
    securityContext:
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: false
      allowPrivilegeEscalation: false 

The same concept applies to console, weave, weave-trace and parquet.

Configuration Reference for W&B Operator

This section describes configuration options for W&B Kubernetes operator (wandb-controller-manager). The operator receives its configuration in the form of a YAML file.

By default, the W&B Kubernetes operator does not need a configuration file. Create a configuration file if required. For example, you might need a configuration file to specify custom certificate authorities, deploy in an air-gapped environment, and so forth.

Find the full list of spec customization in the Helm repository.

Custom CA

customCACerts is a list and can take many certificates. Certificate authorities specified in customCACerts apply only to the W&B Kubernetes operator (wandb-controller-manager).

customCACerts:
- |
  -----BEGIN CERTIFICATE-----
  MIIBnDCCAUKgAwIBAg.....................fucMwCgYIKoZIzj0EAwIwLDEQ
  MA4GA1UEChMHSG9tZU.....................tZUxhYiBSb290IENBMB4XDTI0
  MDQwMTA4MjgzMFoXDT.....................oNWYggsMo8O+0mWLYMAoGCCqG
  SM49BAMCA0gAMEUCIQ.....................hwuJgyQRaqMI149div72V2QIg
  P5GD+5I+02yEp58Cwxd5Bj2CvyQwTjTO4hiVl1Xd0M0=
  -----END CERTIFICATE-----  
- |
  -----BEGIN CERTIFICATE-----
  MIIBxTCCAWugAwIB.......................qaJcwCgYIKoZIzj0EAwIwLDEQ
  MA4GA1UEChMHSG9t.......................tZUxhYiBSb290IENBMB4XDTI0
  MDQwMTA4MjgzMVoX.......................UK+moK4nZYvpNpqfvz/7m5wKU
  SAAwRQIhAIzXZMW4.......................E8UFqsCcILdXjAiA7iTluM0IU
  aIgJYVqKxXt25blH/VyBRzvNhViesfkNUQ==
  -----END CERTIFICATE-----  

CA certificates can also be stored in a ConfigMap:

caCertsConfigMap: custom-ca-certs

The ConfigMap must look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca-certs
data:
  ca-cert1.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----    
  ca-cert2.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----    

FAQ

How to get the W&B Operator Console password

See Accessing the W&B Kubernetes Operator Management Console.

How to access the W&B Operator Console if Ingress doesn’t work

Execute the following command on a host that can reach the Kubernetes cluster:

kubectl port-forward svc/wandb-console 8082

Access the console in the browser at https://localhost:8082/console.

See Accessing the W&B Kubernetes Operator Management Console for how to get the password (Option 2).

How to view W&B Server logs

The application pod is named wandb-app-xxx.

kubectl get pods
kubectl logs wandb-app-XXXXX-XXXXX

How to identify the Kubernetes ingress class

You can get the ingress class installed in your cluster by running

kubectl get ingressclass

Kubernetes operator for air-gapped instances

Deploy W&B Platform with Kubernetes Operator (Airgapped)

Introduction

This guide provides step-by-step instructions to deploy the W&B Platform in air-gapped customer-managed environments.

Use an internal repository or registry to host the Helm charts and container images. Run all commands in a shell console with proper access to the Kubernetes cluster.

You could utilize similar commands in any continuous delivery tooling that you use to deploy Kubernetes applications.

Step 1: Prerequisites

Before starting, make sure your environment meets the following requirements:

  • Kubernetes version >= 1.28
  • Helm version >= 3
  • Access to an internal container registry with the required W&B images
  • Access to an internal Helm repository for W&B Helm charts

Step 2: Prepare internal container registry

Before proceeding with the deployment, you must ensure that the following container images are available in your internal container registry:

These images are critical for the successful deployment of W&B components. W&B recommends that you use WSM to prepare the container registry.

If your organization already uses an internal container registry, you can add the images to it. Otherwise, follow the next section to use a tool called WSM to prepare the container registry.

You are responsible for tracking the Operator’s requirements and for checking for and downloading image upgrades, either by using WSM or by using your organization’s own processes.

Install WSM

Install WSM using one of these methods.

Bash

Run the Bash script directly from GitHub:

curl -sSL https://raw.githubusercontent.com/wandb/wsm/main/install.sh | bash

The script downloads the binary to the folder in which you executed the script. To move it to another folder, execute:

sudo mv wsm /usr/local/bin

GitHub

Download or clone WSM from the W&B managed wandb/wsm GitHub repository at https://github.com/wandb/wsm. See the wandb/wsm release notes for the latest release.

List images and their versions

Get an up-to-date list of image versions using wsm list.

wsm list

The output looks similar to the following:

:package: Starting the process to list all images required for deployment...
Operator Images:
  wandb/controller:1.16.1
W&B Images:
  wandb/local:0.62.2
  docker.io/bitnami/redis:7.2.4-debian-12-r9
  quay.io/prometheus-operator/prometheus-config-reloader:v0.67.0
  quay.io/prometheus/prometheus:v2.47.0
  otel/opentelemetry-collector-contrib:0.97.0
  wandb/console:2.13.1
Here are the images required to deploy W&B. Ensure these images are available in your internal container registry and update the values.yaml accordingly.

Download images

Download the latest versions of all images using wsm download.

wsm download

The output looks similar to the following:

Downloading operator helm chart
Downloading wandb helm chart
✓ wandb/controller:1.16.1
✓ docker.io/bitnami/redis:7.2.4-debian-12-r9
✓ otel/opentelemetry-collector-contrib:0.97.0
✓ quay.io/prometheus-operator/prometheus-config-reloader:v0.67.0
✓ wandb/console:2.13.1
✓ quay.io/prometheus/prometheus:v2.47.0

  Done! Installed 7 packages.

WSM downloads a .tgz archive for each image to the bundle directory.

Step 3: Prepare internal Helm chart repository

Along with the container images, you must also ensure that the following Helm charts are available in your internal Helm chart repository. The WSM tool introduced in the previous step can also download the Helm charts. Alternatively, download them here:

The operator chart is used to deploy the W&B Operator, which is also referred to as the Controller Manager. The platform chart is used to deploy the W&B Platform using the values configured in the custom resource definition (CRD).

Step 4: Set up Helm repository

Now, configure the Helm repository to pull the W&B Helm charts from your internal repository. Run the following commands to add and update the Helm repository:

helm repo add local-repo https://charts.yourdomain.com
helm repo update

Step 5: Install the Kubernetes operator

The W&B Kubernetes operator, also known as the controller manager, is responsible for managing the W&B platform components. To install it in an air-gapped environment, you must configure it to use your internal container registry.

To do so, you must override the default image settings to use your internal container registry and set the key airgapped: true to indicate the expected deployment type. Update the values.yaml file as shown below:

image:
  repository: registry.yourdomain.com/library/controller
  tag: 1.13.3
airgapped: true

Replace the tag with the version that is available in your internal registry.

Install the operator and the CRD:

helm upgrade --install operator local-repo/operator -n wandb --create-namespace -f values.yaml
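
To confirm that the controller manager came up, check the pods in the namespace used above; for example:

kubectl get pods -n wandb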

For full details about the supported values, refer to the Kubernetes operator GitHub repository.

Step 6: Configure W&B Custom Resource

After installing the W&B Kubernetes operator, you must configure the Custom Resource (CR) to point to your internal Helm repository and container registry.

This configuration ensures that the Kubernetes operator uses your internal registry and repository when it deploys the required components of the W&B platform.

Copy this example CR to a new file named wandb.yaml.

apiVersion: apps.wandb.com/v1
kind: WeightsAndBiases
metadata:
  labels:
    app.kubernetes.io/instance: wandb
    app.kubernetes.io/name: weightsandbiases
  name: wandb
  namespace: default

spec:
  chart:
    url: http://charts.yourdomain.com
    name: operator-wandb
    version: 0.18.0

  values:
    global:
      host: https://wandb.yourdomain.com
      license: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      bucket:
        accessKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        secretKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        name: s3.yourdomain.com:port #Ex.: s3.yourdomain.com:9000
        path: bucket_name
        provider: s3
        region: us-east-1
      mysql:
        database: wandb
        host: mysql.home.lab
        password: password
        port: 3306
        user: wandb
      extraEnv:
        ENABLE_REGISTRY_UI: 'true'
    
    # If install: true, Helm installs a MySQL database for the deployment to use. Set to `false` to use your own external MySQL deployment.
    mysql:
      install: false

    app:
      image:
        repository: registry.yourdomain.com/local
        tag: 0.59.2

    console:
      image:
        repository: registry.yourdomain.com/console
        tag: 2.12.2

    ingress:
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: 64m
      class: nginx

    

To deploy the W&B platform, the Kubernetes Operator uses the values from your CR to configure the operator-wandb Helm chart from your internal repository.

Replace all tags/versions with the versions that are available in your internal registry.

More information on creating the preceding configuration file can be found here.

Step 7: Deploy the W&B platform

Now that the Kubernetes operator and the CR are configured, apply the wandb.yaml configuration to deploy the W&B platform:

kubectl apply -f wandb.yaml

FAQ

Refer to the following frequently asked questions (FAQs) and troubleshooting tips during the deployment process:

There is another ingress class. Can that class be used?

Yes, you can configure your ingress class by modifying the ingress settings in values.yaml.

The certificate bundle has more than one certificate. Would that work?

You must split the certificates into multiple entries in the customCACerts section of values.yaml.

How do you prevent the Kubernetes operator from applying unattended updates? Is that possible?

You can turn off auto-updates from the W&B console. Reach out to your W&B team if you have any questions about the supported versions. Also, note that W&B supports platform versions released in the last 6 months. W&B recommends performing periodic upgrades.

Does the deployment work if the environment has no connection to public repositories?

If your configuration sets airgapped to true, the Kubernetes operator uses only your internal resources and does not attempt to connect to public repositories.