Deploy W&B Platform with Kubernetes Operator (Airgapped)
Use the W&B Kubernetes Operator to simplify deploying, administering, troubleshooting, and scaling your W&B Server deployments on Kubernetes. You can think of the operator as a smart assistant for your W&B instance.
The W&B Server architecture and design continuously evolves to expand AI developer tooling capabilities, and to provide appropriate primitives for high performance, better scalability, and easier administration. That evolution applies to the compute services, relevant storage, and the connectivity between them. To help facilitate continuous updates and improvements across deployment types, W&B uses a Kubernetes operator.
For more information about Kubernetes operators, see Operator pattern in the Kubernetes documentation.
Historically, the W&B application was deployed as a single deployment and pod within a Kubernetes cluster, or as a single Docker container. W&B has recommended, and continues to recommend, externalizing the database and object store. Externalizing the database and object store decouples the application's state.
As the application grew, the need to evolve from a monolithic container to a distributed system (microservices) became apparent. This change facilitates backend logic handling and seamlessly introduces built-in Kubernetes infrastructure capabilities. A distributed system also supports deploying new services essential for additional features that W&B relies on.
Before 2024, any Kubernetes-related change required manually updating the terraform-kubernetes-wandb Terraform module. Updating the Terraform module meant ensuring compatibility across cloud providers, configuring the necessary Terraform variables, and executing a Terraform apply for each backend or Kubernetes-level change.
This process was not scalable since W&B Support had to assist each customer with upgrading their Terraform module.
The solution was to implement an operator that connects to a central deploy.wandb.ai server to request the latest specification changes for a given release channel and apply them. Updates are received as long as the license is valid. Helm is used as both the deployment mechanism for the W&B operator and the means for the operator to handle all configuration templating of the W&B Kubernetes stack, Helm-ception.
You can install the operator with Helm or from source. See charts/operator for detailed instructions.
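For example, a minimal Helm-based installation might look like the following sketch. The release name and namespace are placeholders you can change:

```bash
# Add the W&B Helm repository and install the operator chart
helm repo add wandb https://charts.wandb.ai
helm repo update
helm install operator wandb/operator -n wandb-cr --create-namespace
```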
The installation process creates a deployment called `controller-manager` and uses a custom resource definition named `weightsandbiases.apps.wandb.com` (shortName: `wandb`) that takes a single `spec` and applies it to the cluster:
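A minimal custom resource might look like the following sketch. The `apiVersion` is inferred from the CRD name above; the host and license values are placeholders:

```yaml
apiVersion: apps.wandb.com/v1
kind: WeightsAndBiases
metadata:
  name: wandb
  namespace: default
spec:
  values:
    global:
      host: https://wandb.example.com
      license: eyJhbGciOi...
```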
The `controller-manager` installs `charts/operator-wandb` based on the spec of the custom resource, release channel, and a user-defined config. The configuration specification hierarchy enables maximum configuration flexibility at the user end and enables W&B to release new images, configurations, features, and Helm updates automatically.
Refer to the configuration specification hierarchy and configuration reference for configuration options.
Configuration specifications follow a hierarchical model where higher-level specifications override lower-level ones.
This hierarchical model ensures that configurations are flexible and customizable to meet varying needs while maintaining a manageable and systematic approach to upgrades and changes.
Satisfy the following requirements to deploy W&B with the W&B Kubernetes operator:
Refer to the reference architecture. In addition, obtain a valid W&B Server license.
See this guide for a detailed explanation on how to set up and configure a self-managed installation.
Depending on the installation method, you might need to meet the following requirements:
See the Deploy W&B in airgapped environment with Kubernetes tutorial on how to install the W&B Kubernetes Operator in an airgapped environment.
This section describes different ways to deploy the W&B Kubernetes operator.
Choose one of the following:
W&B provides a Helm Chart to deploy the W&B Kubernetes operator to a Kubernetes cluster. This approach allows you to deploy W&B Server with the Helm CLI or a continuous delivery tool like ArgoCD. Make sure that the requirements mentioned above are in place.
Follow these steps to install the W&B Kubernetes Operator with the Helm CLI:
Configure the W&B operator custom resource to trigger the W&B Server installation. Copy this example configuration to a file named `operator.yaml` so that you can customize your W&B deployment. Refer to the Configuration Reference.
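The following sketch shows the general shape of such a file. The host, license, bucket, and database values are placeholders; verify the exact keys against the Configuration Reference:

```yaml
apiVersion: apps.wandb.com/v1
kind: WeightsAndBiases
metadata:
  name: wandb
  namespace: default
spec:
  values:
    global:
      host: https://wandb.example.com
      license: eyJhbGciOi...
      bucket:
        provider: s3
        name: bucket-name
        region: us-east-1
      mysql:
        host: 10.0.0.5
        port: 3306
        database: wandb_local
        user: wandb
        password: password
```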
Start the Operator with your custom configuration so that it can install and configure the W&B Server application.
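For example, assuming your configuration is in `operator.yaml`:

```bash
kubectl apply -f operator.yaml
```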
Wait until the deployment completes. This takes a few minutes.
To verify the installation using the web UI, create the first admin user account, then follow the verification steps outlined in Verify the installation.
This method allows for customized deployments tailored to specific requirements, leveraging Terraform’s infrastructure-as-code approach for consistency and repeatability. The official W&B Helm-based Terraform Module is located here.
The following code can be used as a starting point and includes all necessary configuration options for a production grade deployment.
Note that the configuration options are the same as described in the Configuration Reference, but the syntax has to follow the HashiCorp Configuration Language (HCL). The Terraform module creates the W&B custom resource definition (CRD).
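A minimal sketch of such a module call is shown below. The module source path and the `spec` input are assumptions based on the Helm-based module; check the module's documentation for the exact inputs:

```hcl
module "wandb" {
  # Source path is an assumption; point this at the official Helm-based module
  source = "github.com/wandb/terraform-helm-wandb"

  # The same configuration as operator.yaml, expressed in HCL
  spec = {
    values = {
      global = {
        host    = "https://wandb.example.com"
        license = "eyJhbGciOi..."
      }
    }
  }
}
```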
To see how Weights & Biases uses the Helm Terraform module to deploy Dedicated Cloud installations for customers, follow these links:
W&B provides a set of Terraform Modules for AWS, GCP, and Azure. Those modules deploy the entire infrastructure, including Kubernetes clusters, load balancers, MySQL databases, and so on, as well as the W&B Server application. The W&B Kubernetes Operator comes pre-baked with those official W&B cloud-specific Terraform Modules, starting with the following versions:
| Terraform Registry | Source Code | Version |
|---|---|---|
| AWS | https://github.com/wandb/terraform-aws-wandb | v4.0.0+ |
| Azure | https://github.com/wandb/terraform-azurerm-wandb | v2.0.0+ |
| GCP | https://github.com/wandb/terraform-google-wandb | v2.0.0+ |
This integration ensures that W&B Kubernetes Operator is ready to use for your instance with minimal setup, providing a streamlined path to deploying and managing W&B Server in your cloud environment.
For a detailed description of how to use these modules, refer to the self-managed installations section in the docs.
To verify the installation, W&B recommends using the W&B CLI. The verify command executes several tests that verify all components and configurations.
Follow these steps to verify the installation:
Install the W&B CLI:
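The CLI ships with the `wandb` Python package:

```bash
pip install wandb
```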
Log in to W&B. For example:
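Use your instance URL as the host; `wandb.company-name.com` is a placeholder:

```bash
wandb login --host=https://wandb.company-name.com
```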
Verify the installation:
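As noted above, the W&B CLI provides a verify command:

```bash
wandb verify
```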
A successful installation and fully working W&B deployment shows the following output:
The W&B Kubernetes operator comes with a management console. It is located at `${HOST_URI}/console`, for example `https://wandb.company-name.com/console`.
There are two ways to log in to the management console:
1. Open the W&B application in the browser and log in at `${HOST_URI}/`, for example `https://wandb.company-name.com/`.
2. Access the console. Click the icon in the top right corner and then click System console. Only users with admin privileges can see the System console entry.
This section describes how to update the W&B Kubernetes operator.
Copy and paste the code snippets below into your terminal.
First, update the repo with `helm repo update`:
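```bash
helm repo update
```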
Next, update the Helm chart with `helm upgrade`:
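Assuming the release and chart names used at install time (`operator` and `wandb/operator`):

```bash
helm upgrade operator wandb/operator -n wandb-cr --reuse-values
```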
You no longer need to update the W&B Server application manually if you use the W&B Kubernetes operator.
The operator automatically updates your W&B Server application when W&B releases a new version of the software.
The following sections describe how to migrate from self-managing your own W&B Server installation to using the W&B Operator to do this for you. The migration process depends on how you installed W&B Server:
For a detailed description of the migration process, continue here.
Reach out to Customer Support or your W&B team if you have any questions or need assistance.
Follow these steps to migrate to the Operator-based Helm chart:
Get the current W&B configuration. If W&B was deployed with a non-operator-based version of the Helm chart, export the values like this:
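Assuming the Helm release is named `wandb`:

```bash
helm get values wandb > values.yaml
```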
If W&B was deployed with Kubernetes manifests, export the values like this:
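A sketch, assuming the deployment is named `wandb`:

```bash
kubectl get deployment wandb -o yaml > wandb-deployment.yaml
```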
You now have all the configuration values you need for the next step.
Create a file called `operator.yaml`. Follow the format described in the Configuration Reference. Use the values from step 1.
Scale the current deployment to 0 pods. This step stops the current deployment.
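For example, assuming the deployment is named `wandb`:

```bash
kubectl scale deployment wandb --replicas=0
```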
Update the Helm chart repo:
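```bash
helm repo add wandb https://charts.wandb.ai
helm repo update
```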
Install the new Helm chart:
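Assuming the same release name and namespace as in the installation section:

```bash
helm install operator wandb/operator -n wandb-cr --create-namespace
```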
Configure the new helm chart and trigger W&B application deployment. Apply the new configuration.
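For example:

```bash
kubectl apply -f operator.yaml
```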
The deployment takes a few minutes to complete.
Verify the installation. Make sure that everything works by following the steps in Verify the installation.
Remove the old installation. Uninstall the old Helm chart or delete the resources that were created with manifests.
This section describes the configuration options for the W&B Server application. The application receives its configuration as a custom resource definition named WeightsAndBiases. Some configuration options are exposed with the below configuration; some need to be set as environment variables.
The documentation has two lists of environment variables: basic and advanced. Only use environment variables if the configuration options that you need are not exposed via the Helm chart.
The W&B Server application configuration file for a production deployment requires the following contents. This YAML file defines the desired state of your W&B deployment, including the version, environment variables, external resources like databases, and other necessary settings.
Find the full set of values in the W&B Helm repository, and change only those values you need to override.
This is an example configuration that uses GCP Kubernetes with GCP Ingress and GCS (GCP Object storage):
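A sketch of such a configuration is shown below. The bucket, database, and ingress values are placeholders, and the exact keys should be verified against the chart values:

```yaml
apiVersion: apps.wandb.com/v1
kind: WeightsAndBiases
metadata:
  name: wandb
  namespace: default
spec:
  values:
    global:
      host: https://wandb.example.com
      license: eyJhbGciOi...
      bucket:
        provider: gcs
        name: company-wandb-bucket
      mysql:
        host: 10.0.0.5
        port: 3306
        database: wandb_local
        user: wandb
        password: password
    ingress:
      annotations:
        kubernetes.io/ingress.class: gce
        kubernetes.io/ingress.global-static-ip-name: wandb-ip
        networking.gke.io/managed-certificates: wandb-cert
```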
The bucket configuration depends on the object storage provider: AWS, GCP, Azure, or another S3-compatible provider such as MinIO or Ceph. For other S3-compatible providers, set the bucket configuration as follows:
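A sketch, with placeholder endpoint and credentials:

```yaml
global:
  bucket:
    provider: s3
    name: bucket-name
    endpoint: https://minio.example.com:9000
    accessKey: <access-key>
    secretKey: <secret-key>
    kmsKey: null
```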
For S3-compatible storage hosted outside of AWS, `kmsKey` must be `null`.
To reference `accessKey` and `secretKey` from a secret:
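First create the secret, then point the bucket configuration at it. The key names under `secret` are assumptions; verify them against the chart values:

```bash
kubectl create secret generic bucket-secret \
  --from-literal=ACCESS_KEY=<access-key> \
  --from-literal=SECRET_KEY=<secret-key>
```

```yaml
global:
  bucket:
    secret:
      secretName: bucket-secret    # assumed key name
      accessKeyName: ACCESS_KEY    # assumed key name
      secretKeyName: SECRET_KEY    # assumed key name
```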
To reference the `password` from a secret:
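A sketch, assuming this refers to the external MySQL password; the `passwordSecret` shape is an assumption to check against the chart values:

```bash
kubectl create secret generic mysql-secret \
  --from-literal=MYSQL_PASSWORD=<password>
```

```yaml
global:
  mysql:
    passwordSecret:
      name: mysql-secret           # assumed key name
      passwordKey: MYSQL_PASSWORD  # assumed key name
```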
To reference the `license` from a secret:
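A sketch; the secret name and the `licenseSecret` keys are placeholders to verify against the chart values:

```bash
kubectl create secret generic wandb-license \
  --from-literal=license=<your-license>
```

```yaml
global:
  licenseSecret:
    name: wandb-license
    key: license
```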
To identify the ingress class, see this FAQ entry.
Without TLS
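A minimal sketch, assuming an nginx ingress class:

```yaml
ingress:
  class: nginx
```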
With TLS
First, create a secret that contains the certificate:
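Kubernetes has a built-in TLS secret type for this; the secret and file names are placeholders:

```bash
kubectl create secret tls wandb-ingress-tls --key=tls.key --cert=tls.crt
```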
Then, reference the secret in the ingress configuration:
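A sketch, assuming the secret name from the previous step and a placeholder hostname:

```yaml
ingress:
  class: nginx
  tls:
    - secretName: wandb-ingress-tls
      hosts:
        - wandb.example.com
```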
In case of Nginx you might have to add the following annotation:
```yaml
ingress:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 64m
```
Specify custom Kubernetes service accounts to run the W&B pods.
The following snippet creates a service account as part of the deployment with the specified name:
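For example, for the `app` and `parquet` subsystems; the service account name is a placeholder:

```yaml
app:
  serviceAccount:
    name: custom-service-account
    create: true

parquet:
  serviceAccount:
    name: custom-service-account
    create: true
```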
The subsystems “app” and “parquet” run under the specified service account. The other subsystems run under the default service account.
If the service account already exists on the cluster, set `create: false`:
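```yaml
app:
  serviceAccount:
    name: custom-service-account
    create: false
```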
You can specify service accounts for different subsystems, such as app, parquet, console, and others. The service accounts can differ between the subsystems:
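A sketch with placeholder names:

```yaml
app:
  serviceAccount:
    name: app-service-account
    create: true

console:
  serviceAccount:
    name: console-service-account
    create: true
```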
To reference the `password` from a secret:
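For example, assuming this refers to an external Redis password; the secret and key names are placeholders:

```bash
kubectl create secret generic redis-secret \
  --from-literal=redis-password=<password>
```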
Reference it in the configuration below:
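A sketch of the reference; the `secret` key names are assumptions to verify against the chart values:

```yaml
global:
  redis:
    host: redis.example.com
    port: 6379
    secret:
      secretName: redis-secret   # assumed key name
      secretKey: redis-password  # assumed key name
```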
Without TLS
With TLS
The LDAP TLS cert configuration requires a config map pre-created with the certificate content.
To create the config map you can use the following command:
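Assuming the certificate is in a local file named `certificate.crt`; the ConfigMap name is a placeholder:

```bash
kubectl create configmap ldap-tls-cert --from-file=certificate.crt
```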
And use the config map in the YAML like the example below
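A sketch, assuming the ConfigMap from the previous step; the exact LDAP keys should be checked against the Configuration Reference:

```yaml
global:
  ldap:
    enabled: true
    host: ldaps://ldap.example.com:636
    baseDN: dc=example,dc=com
    tls:
      enabled: true
      configMap:
        name: ldap-tls-cert
        key: certificate.crt
```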
`authMethod` is optional.
`customCACerts` is a list and can take many certificates. Certificate authorities specified in `customCACerts` only apply to the W&B Server application.
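For example, as a list of PEM blocks (truncated placeholders shown):

```yaml
customCACerts:
  - |
    -----BEGIN CERTIFICATE-----
    MIIBnDCCAUKgAwIBAg...
    -----END CERTIFICATE-----
  - |
    -----BEGIN CERTIFICATE-----
    MIIBxTCCAWugAwIBAg...
    -----END CERTIFICATE-----
```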
CA certificates can also be stored in a ConfigMap:
The ConfigMap must look like this:
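For example (the ConfigMap name is a placeholder; note the `.crt` key suffix requirement described below):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca-certs
  namespace: default
data:
  ca-cert1.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  ca-cert2.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```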
Each certificate file name in the ConfigMap must end in `.crt` (for example, `my-cert.crt` or `ca-cert1.crt`). This naming convention is required for `update-ca-certificates` to parse and add each certificate to the system CA store.

Each W&B component supports custom security context configurations of the following form:
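A sketch of the general shape, using standard Kubernetes `securityContext` fields:

```yaml
pod:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    runAsGroup: 0
    fsGroup: 1001
    seccompProfile:
      type: RuntimeDefault
```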
The only valid value for `runAsGroup:` is `0`. Any other value is an error.

For example, to configure the application pod, add a section `app` to your configuration:
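A sketch, nesting the same fields under `app`:

```yaml
app:
  pod:
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
      runAsGroup: 0
      fsGroup: 1001
      seccompProfile:
        type: RuntimeDefault
```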
The same concept applies to `console`, `weave`, `weave-trace`, and `parquet`.
This section describes configuration options for the W&B Kubernetes operator (`wandb-controller-manager`). The operator receives its configuration in the form of a YAML file.
By default, the W&B Kubernetes operator does not need a configuration file. Create a configuration file if required. For example, you might need a configuration file to specify custom certificate authorities, deploy in an air-gapped environment, and so forth.
Find the full list of spec customizations in the Helm repository.
A custom certificate authority (`customCACerts`) is a list and can take many certificates. Those certificate authorities, when added, only apply to the W&B Kubernetes operator (`wandb-controller-manager`).
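The format is the same as for the W&B Server application, for example:

```yaml
customCACerts:
  - |
    -----BEGIN CERTIFICATE-----
    MIIBnDCCAUKgAwIBAg...
    -----END CERTIFICATE-----
```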
CA certificates can also be stored in a ConfigMap:
The ConfigMap must look like this:
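The format matches the ConfigMap shown for the W&B Server application; the name and namespace are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca-certs
  namespace: default
data:
  ca-cert1.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```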
Each certificate file name in the ConfigMap must end in `.crt` (for example, `my-cert.crt` or `ca-cert1.crt`). This naming convention is required for `update-ca-certificates` to parse and add each certificate to the system CA store.

The W&B Kubernetes operator deploys the following pods:

- `wandb-app`: the core of W&B, including the GraphQL API and frontend application. It powers most of our platform's functionality.
- `wandb-console`: the administration console, accessed via `/console`.
- `wandb-otel`: the OpenTelemetry agent, which collects metrics and logs from resources at the Kubernetes layer for display in the administration console.
- `wandb-prometheus`: the Prometheus server, which captures metrics from various components for display in the administration console.
- `wandb-parquet`: a backend microservice, separate from the `wandb-app` pod, that exports database data to object storage in Parquet format.
- `wandb-weave`: another backend microservice that loads query tables in the UI and supports various core app features.
- `wandb-weave-trace`: a framework for tracking, experimenting with, evaluating, deploying, and improving LLM-based applications. The framework is accessed via the `wandb-app` pod.

See Accessing the W&B Kubernetes Operator Management Console.
Execute the following command on a host that can reach the Kubernetes cluster:
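A sketch using `kubectl port-forward`; the console service name is an assumption:

```bash
kubectl port-forward svc/wandb-console 8082
```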
Access the console in the browser at `https://localhost:8082/console`.
See Accessing the W&B Kubernetes Operator Management Console on how to get the password (Option 2).
The application pod is named `wandb-app-xxx`.
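To fetch the application logs, first find the pod name, then read its logs; `wandb-app-xxx` stands in for the actual pod name:

```bash
kubectl get pods | grep wandb-app
kubectl logs wandb-app-xxx
```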
You can get the ingress class installed in your cluster by running:
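```bash
kubectl get ingressclass
```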