
Platform Infrastructure


The reliability, security, and performance of the Contain Platform are built upon a robust and flexible infrastructure foundation. This document provides an overview of the infrastructure layer that supports your Workload Clusters, our design principles, and the different deployment models we offer to meet your specific needs.

Our infrastructure philosophy is built on four key principles:

  • Security by Design

    We build in security at every layer, from the physical hardware and network to the virtualized resources.

  • High Availability

    The infrastructure is engineered to be resilient to component failures, ensuring your applications remain available.

  • Deployment Flexibility

    We provide multiple deployment models to accommodate your unique security, compliance, and operational requirements.

  • Operational Excellence

    The infrastructure is designed for streamlined, automated management, reducing operational overhead and human error.

Deployment Models

The Contain Platform is designed to run in a variety of environments, giving you the flexibility to choose the model that best aligns with your business strategy.

  • Datacenters

    We can host and manage the platform entirely within our own secure, high-availability datacenters in Denmark.

  • Public Cloud

    We can deploy and manage the platform within your organization's own public cloud subscription.

  • Private Cloud

    For organizations that require data to remain within their own facilities, we can deploy the platform on your own on-premises hardware.

  • Air-Gapped Environments

    We fully support deploying the platform in high-security, air-gapped environments with no internet connectivity, ensuring complete data isolation.

Network Architecture and Security

Diagram: Network architecture. Traffic from the internet passes through firewalls / security groups into isolated network segments: one hosts the Management Plane, another hosts the Workload Cluster (control plane and worker nodes). The Management Plane manages the cluster over a secure private connection, while application traffic enters the cluster through its own firewall.

We enforce strict network security and isolation to protect your workloads. This is a foundational aspect of our multi-tenant architecture.

Network Segmentation

Every workload cluster is provisioned within its own completely isolated network segment (e.g., VPC, VNet, VLAN, or VXLAN).

This ensures that there is no direct network path between the clusters of different customers. All traffic is denied by default, and only explicitly allowed traffic can enter or leave a cluster's network.
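
To illustrate the same default-deny posture inside a cluster, the minimal sketch below creates a Kubernetes NetworkPolicy that blocks all ingress and egress for a namespace until specific rules are added. It uses the official Python Kubernetes client; the namespace and policy names are illustrative placeholders, not the platform's actual configuration.

```python
# Minimal sketch: a default-deny NetworkPolicy applied to one namespace.
# Namespace and policy names are placeholders for illustration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

deny_all = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-all", namespace="team-a"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods in the namespace
        policy_types=["Ingress", "Egress"],     # deny both directions until rules allow traffic
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="team-a", body=deny_all
)
```

With this policy in place, additional NetworkPolicies then open only the specific flows an application needs, mirroring the allow-list approach used at the segment boundary.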

Firewalls and Security Groups

Each network segment is protected by a layer of stateful firewalls or security groups (depending on the environment). These rules follow the principle of least privilege, allowing traffic only to the specific ports and protocols required for the Kubernetes API, application ingress, and our management agents.
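
As a hedged illustration of what least-privilege rules can look like in a public cloud environment, the sketch below uses boto3 to open only the Kubernetes API port and HTTPS ingress on an AWS security group. The group ID, CIDR ranges, region, and port choices are placeholders and do not reflect the platform's real rule set.

```python
# Illustrative least-privilege security group rules using boto3 (AWS).
# All identifiers and ranges below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-north-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {   # Kubernetes API server, reachable only from the management network
            "IpProtocol": "tcp", "FromPort": 6443, "ToPort": 6443,
            "IpRanges": [{"CidrIp": "10.10.0.0/24", "Description": "management plane"}],
        },
        {   # HTTPS application ingress
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "application ingress"}],
        },
    ],
)
# Anything not listed above stays blocked: security groups deny inbound traffic by default.
```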

Private Networking

In Our Datacenters

All communication between your workload clusters and our central Management Plane is conducted over secure, private network connections. Your cluster's control plane is not reachable from the public internet unless you explicitly request public exposure.

In Public Cloud and Other Datacenters

Communication relies on the private-networking capabilities of the underlying infrastructure. Most Public Cloud providers offer private networking options such as VPC peering, VPNs, or private links. In other datacenters, private networking can be achieved through dedicated connections such as MPLS circuits or private fiber links.
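
The sketch below shows, under assumed placeholder IDs, how such a private path can be set up in AWS with boto3 by peering a management VPC with a workload VPC. It illustrates the concept only and is not our provisioning code.

```python
# Hedged sketch: VPC peering so management traffic never traverses the public internet.
# VPC IDs and the region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-north-1")

# Request peering between the management VPC and a customer workload VPC.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaa1",      # management plane VPC (placeholder)
    PeerVpcId="vpc-0bbbbbbbbbbbbbbb2",  # workload cluster VPC (placeholder)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request (same-account peering in this sketch), completing the private link.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Routes to the peer CIDR are then added to each VPC's route tables
# (ec2.create_route) so traffic flows over the peering rather than the internet.
```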

High Availability and Disaster Recovery

The infrastructure is designed to be highly available and resilient to failures at both the component and site level.

High Availability (HA)

To protect against local component failures, the infrastructure is designed with redundancy at every layer.

  • Cluster Node Placement: The control plane and worker nodes for your clusters are automatically spread across different physical racks, Availability Zones (AZs), or Availability Cells (ACs). This ensures that the failure of a single rack, server, top-of-rack switch, or even an entire datacenter will not cause a complete cluster outage (see the sketch after this list).
  • Redundant Infrastructure: In our datacenters, we utilize redundant power, cooling, and network uplinks to ensure the stability of the physical environment.
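
The node-level spreading described above has a workload-level counterpart: applications can ask the scheduler to spread their replicas across zones as well. The sketch below is a minimal, illustrative example using the Python Kubernetes client with a topology spread constraint; the image, labels, namespace, and replica count are assumptions made for the example, not platform defaults.

```python
# Minimal sketch: spread application replicas across availability zones.
# Image, labels, namespace, and replica count are illustrative only.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "web"}
pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="web", image="nginx:1.27")],
    topology_spread_constraints=[
        client.V1TopologySpreadConstraint(
            max_skew=1,                                  # keep zones within one replica of each other
            topology_key="topology.kubernetes.io/zone",  # spread across availability zones
            when_unsatisfiable="DoNotSchedule",
            label_selector=client.V1LabelSelector(match_labels=labels),
        )
    ],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web", namespace="team-a"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels), spec=pod_spec
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="team-a", body=deployment)
```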

Diagram: Power redundancy. Each datacenter (Datacenter 1, 2, and 3) is supplied by redundant power feeds from two independent power sources.

Diagram: Network redundancy. Dual ISP feeds connect to redundant core routers, firewalls, core switches, and distribution switches before reaching the servers.

Diagram: Node placement. Within a region, each of three Availability Zones hosts one control plane node and three worker nodes.

Disaster Recovery (DR)

For protection against a complete site or region failure, the platform leverages the backup and recovery mechanisms detailed in the architecture documentation. By regularly backing up git repositories, container images, and data volumes to a geographically separate location, we can restore your entire environment in a new region in a disaster scenario.

We follow the industry-standard 3-2-1 backup strategy to ensure data durability. This means we maintain at least three copies of your data, on two different storage media, with one copy located off-site. This layered approach provides strong protection against a wide range of failure scenarios, from disk corruption to a full datacenter outage.
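
To make the 3-2-1 rule concrete, the short sketch below expresses it as a check over a hypothetical backup inventory; the copy locations and media types are invented for illustration only.

```python
# Illustrative sketch of the 3-2-1 rule as a check over a backup inventory.
# The copy metadata below is made up; it only demonstrates the counting logic.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "dc-primary", "dc-secondary"
    media: str      # e.g. "disk", "object-storage", "tape"
    offsite: bool   # stored in a geographically separate location?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """At least 3 copies, on at least 2 media types, with at least 1 off-site."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    BackupCopy("dc-primary", "disk", offsite=False),            # live data
    BackupCopy("dc-primary", "object-storage", offsite=False),  # local backup
    BackupCopy("dc-secondary", "object-storage", offsite=True), # geo-separate copy
]
print(satisfies_3_2_1(inventory))  # True
```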

Hardware and Performance

In our own datacenters, we use enterprise-grade hardware to provide a stable and performant foundation for your applications.

  • Compute: Your clusters run on high-performance servers with modern processors to ensure your applications have the CPU resources they need.
  • Storage: We utilize high-performance, redundant storage solutions for Kubernetes Persistent Volumes, providing reliable and fast disk I/O for your stateful applications (see the sketch after this list).
  • Dedicated Hardware: As mentioned in our Workload Clusters documentation, we offer the option of running your clusters on physically isolated, dedicated hardware for customers with the most stringent performance or compliance requirements.
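
As a hedged example of how a stateful application consumes this storage, the sketch below requests a Persistent Volume Claim through the Python Kubernetes client. The storage class name, namespace, and size are placeholders rather than Contain-specific values.

```python
# Hedged sketch: a stateful application requesting persistent storage.
# Storage class, namespace, and size are placeholders.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data", namespace="team-a"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-ssd",  # placeholder class backed by redundant storage
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="team-a", body=pvc
)
```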