
Kubernetes Workload Clusters


The Workload Cluster is the core of our platform offering and the environment where your applications run. A typical setup consists of multiple clusters to support a full software development lifecycle (e.g., development, staging, and production). Each cluster is a dedicated, secure, and fully managed Kubernetes environment.

This document provides a high-level overview of our workload clusters, explaining the architecture, tenancy models, and isolation boundaries. It is intended for architects and technical stakeholders who are designing solutions on the platform, as well as for anyone seeking to understand the fundamental concepts of how our clusters are structured.

The Shared Responsibility Model

The platform operates on a shared responsibility model, which clarifies which security and operational tasks are handled by our team and which are handled by you. In short: we manage the platform, and you manage your applications on it.

For a detailed breakdown of these responsibilities, please see the dedicated Shared Responsibility Model document.

Contain Base Service

Every workload cluster is built upon our Contain Base service. This is not just a plain Kubernetes installation; it's our curated, hardened, and pre-integrated set of core components that provide a production-ready environment out of the box. It includes solutions for networking, DNS, certificate management, security policy, and more.

This "batteries-included" approach ensures that every cluster is consistent, secure, and ready to run your workloads without extensive setup or configuration on your part. To learn more, see the Platform Components documentation.

Core Features

The Contain Base service comes with a wide range of built-in capabilities, delivered by our integrated set of core components:

  • Fully Managed Control Plane & Nodes: We manage the entire lifecycle of the Kubernetes control plane and the underlying operating systems on the worker nodes, including security patching, upgrades, and health monitoring, freeing you from infrastructure management.
  • Automated Security & Governance: Custom policies are enforced, TLS certificates are managed automatically, and secrets are securely synced from external stores. A default-deny network model blocks all cross-namespace traffic unless it is explicitly allowed.
  • Secure & Automated Networking: A high-performance Ingress controller manages external access to your services, while public DNS records are created and managed automatically for your applications (see the example after this list).
  • GitOps & Continuous Delivery: The cluster's state is continuously reconciled with your configuration stored in Git, automating deployments and infrastructure management. Namespace creation is also automated to ensure consistency and security.
  • Operations & Resilience: Robust backup and restore capabilities for your cluster resources and persistent volumes ensure business continuity. Basic resource monitoring enables workload autoscaling.
  • Advanced CNI Networking: The platform utilizes industry-leading CNIs like Cilium and Calico to provide secure and high-performance pod networking, including advanced features like eBPF-based observability and security.
  • Platform Observability: The service is deeply integrated with our central platform observability capabilities, providing a unified view of the health and performance of your clusters and applications.
  • Application Observability (Optional): For deeper insights, an optional, paid service can be added to provide a complete telemetry solution (metrics, logs, and traces) for your applications.
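
As an illustration of the networking and certificate automation described above, a typical application Ingress might look like the sketch below. This is a minimal, hedged example: the ingress class name, hostname, and TLS secret name are placeholders, and any platform-specific annotations that may be required are not specified in this document.

```yaml
# Minimal sketch of an application Ingress on a workload cluster.
# The ingress class, hostname, and TLS secret name are placeholders;
# per this document, public DNS records and TLS certificates are
# created and managed automatically by the platform.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-team
spec:
  ingressClassName: nginx        # assumption: the actual class name is platform-specific
  rules:
    - host: my-app.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # assumes a Service named my-app exists in this namespace
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls     # certificate expected to be provisioned automatically
```

Once an object like this is applied, the platform's Ingress controller exposes the Service externally, and the automated DNS and certificate management described above handle the hostname's public record and its TLS certificate.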

Tenancy and Isolation Model

We provide strong isolation at the cluster level by default, while still leveraging the efficiency of shared infrastructure.

Dedicated Clusters on Shared Infrastructure

Each dedicated Workload Cluster has its own Kubernetes control plane and worker nodes, logically isolating your workloads from those of other customers. This gives you your own API server, nodes, and namespaces in a secure, separate environment.

These dedicated clusters run on a multi-tenant infrastructure layer where the underlying physical servers, networking, and storage are shared. This model provides the best balance of strong security isolation and cost-efficiency.

Diagram: Dedicated Clusters on Shared Infrastructure. Your clusters and other customers' clusters each run in their own network segment on the shared infrastructure, alongside the Management Plane and shared managed services.

Dedicated Hardware

For organizations with stringent compliance or performance requirements, we offer fully dedicated, physically isolated hardware. This provides the highest level of isolation by ensuring no other customer's workloads run on the same physical machines as yours.

Contact us for more information.

Intra-Cluster Isolation with Namespaces

While clusters provide high-level isolation, Kubernetes Namespaces are the primary tool for logical isolation within a single cluster. They allow multiple teams, projects, or applications to share a cluster safely.

To ensure this isolation is secure and consistent, the Namespace Provisioning service automates the creation of new namespaces. This service is powered by a dedicated Kubernetes operator.

When you create a namespace using this service, it is automatically configured with a set of secure defaults, including:

  • Default Network Policies to enforce a default-deny network model.
  • Resource Quotas to ensure fair resource allocation.
  • Standard RBAC Roles for access control.

This automation ensures that every project gets a secure, compliant, and ready-to-use environment from the moment it is created, eliminating manual setup and configuration errors.
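
For illustration, the secure defaults applied to a provisioned namespace could look roughly like the manifests below. This is a hedged sketch: the actual policy names, quota values, and RBAC roles are defined by the platform and are not documented here. Under the default-deny model, any additional traffic has to be opened with explicit allow policies such as the second one shown.

```yaml
# Illustrative sketch only: names and values are not the platform's actual defaults.
# Default-deny: block all incoming traffic to pods in this namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-team
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
---
# Explicit allow: permit traffic between pods within the same namespace,
# so only cross-namespace traffic remains blocked by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: my-team
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # any pod in this namespace
---
# Illustrative resource quota for fair resource allocation; real limits are platform-defined.
# Standard RBAC Roles and RoleBindings would be created alongside these objects.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: my-team
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```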

Diagram: Intra-Cluster Isolation. The Namespace Operator pulls configuration from a Git repository (GitOps), then creates and configures each new namespace with default Network Policies, Resource Quotas, and standard RBAC Roles.

The Role of the Management Plane

Your Workload Cluster is not an island; it is supported by a centralized Management Plane. This plane provides a set of essential, shared services that your cluster relies on, such as:

  • A secure container registry
  • An identity provider (IdP) for user authentication
  • Secure storage for cluster backups (S3-compatible)
  • Git repositories for GitOps automation
  • Secrets Store (OpenBAO) for secure secrets management (see the sketch after this list)
  • DNS service
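
As an example of how a workload might consume the shared Secrets Store, the sketch below assumes an External Secrets Operator-style integration. This document does not name the sync mechanism, so the API kinds, store reference, and key path shown here are assumptions rather than platform-confirmed values.

```yaml
# Hedged sketch: assumes an External Secrets Operator-style integration with the
# platform's Secrets Store (OpenBAO). Store name, key path, and target secret
# name are placeholders rather than platform-defined values.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-credentials
  namespace: my-team
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: platform-secrets-store   # assumption: name of the configured secret store
    kind: ClusterSecretStore
  target:
    name: my-app-credentials       # Kubernetes Secret created in this namespace
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: my-team/my-app        # placeholder path in the secrets store
        property: database_password
```

The resulting Kubernetes Secret can then be mounted or injected into your workloads in the usual way.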

While the Management Plane is typically a central, shared service, we can also deploy a dedicated Management Cluster for fully isolated environments (e.g., air-gapped datacenters). This smaller, local cluster provides all necessary management services without needing external connectivity.

Detailed Service Documentation

For a detailed technical overview of the components, included services, and integration capabilities, please see the complete documentation for the Contain Base service.