Networking in Kubernetes

Kubernetes provides a networking model that allows your applications to communicate with each other and with the outside world. Understanding a few core networking concepts is essential for deploying, exposing, and securing your applications on the Contain Platform.

This guide provides a high-level overview of the most important networking resources.

Further Reading

For a more comprehensive deep-dive into this topic, please refer to the official Kubernetes Networking documentation.

Services

Problem: Pods in Kubernetes are ephemeral; they can be created and destroyed, and their IP addresses change. If you have a set of backend pods, how can frontend pods connect to them reliably?

Solution: A Service is a Kubernetes resource that provides a stable endpoint (a single, unchanging IP address and DNS name) for a set of pods. It acts as an internal load balancer, distributing traffic to all the healthy pods that match its selector.

The most common service type is a ClusterIP, which exposes the service on an internal IP that is only reachable from within the cluster.

Example

This Service targets all pods with the label app: my-backend and exposes their port 8080 on its own port 80. Other pods in the cluster can now reliably connect to this service using the DNS name my-backend-service.

apiVersion: v1
kind: Service
metadata:
  name: my-backend-service
spec:
  selector:
    app: my-backend # Selects pods with this label
  ports:
    - protocol: TCP
      port: 80 # The port the service is available on
      targetPort: 8080 # The port the pods are listening on
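
As an illustration of how that DNS name is consumed, the sketch below is a hypothetical frontend Deployment that passes the service URL to its application through an environment variable; the image name and variable name are illustrative assumptions, not part of the platform.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      labels:
        app: my-frontend
    spec:
      containers:
      - name: frontend
        image: registry.example.com/my-frontend:1.0 # Illustrative image
        env:
        - name: BACKEND_URL
          # Within the same namespace the short name resolves directly;
          # from other namespaces use my-backend-service.<namespace>.svc.cluster.local
          value: "http://my-backend-service:80"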

The LoadBalancer Service Type

For non-HTTP(S) traffic, or when you need to expose a service directly to the internet with its own IP address, you can use the type: LoadBalancer.

When you create a LoadBalancer service, the platform will automatically provision an external load balancer from the underlying infrastructure provider (e.g., an ELB on AWS or an Azure Load Balancer). This external load balancer will have a public IP address and will forward traffic to your service.

Further Reading

ClusterIP and LoadBalancer are two of the most common service types. To learn about others, such as NodePort, see the official Kubernetes documentation on Publishing Services (ServiceTypes).

Example

This example exposes a TCP service on port 5432 to the internet.

apiVersion: v1
kind: Service
metadata:
  name: my-database-service
spec:
  type: LoadBalancer
  selector:
    app: my-database
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
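
Exposing a database port directly to the internet is rarely desirable without restrictions. As a sketch, most cloud providers honour the standard loadBalancerSourceRanges field, which limits the client IP ranges that may reach the external load balancer; the CIDR below is a placeholder.

apiVersion: v1
kind: Service
metadata:
  name: my-database-service
spec:
  type: LoadBalancer
  # Restricts which client IP ranges may reach the external load balancer.
  # The CIDR below is a placeholder; support depends on the cloud provider.
  loadBalancerSourceRanges:
  - "203.0.113.0/24"
  selector:
    app: my-database
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432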

Ingress

Problem: A ClusterIP Service is only reachable from inside the cluster, and a LoadBalancer Service provisions a separate external load balancer and IP address for every application. How do you expose many HTTP(S) applications to external users through a single entry point, with routing based on hostnames and paths?

Solution: An Ingress is a Kubernetes resource that manages external access to the services in a cluster, typically for HTTP and HTTPS traffic. It provides routing rules that define which inbound traffic should be directed to which service.

An Ingress is not a service itself; it requires an Ingress controller (like the Service Proxy) to be running in the cluster to function.

Example

This Ingress resource routes external traffic. Any request to myapp.example.com with a path starting with /api will be sent to the my-backend-service on port 80.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-application-ingress
spec:
  rules:
  - host: "myapp.example.com"
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: my-backend-service
            port:
              number: 80
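
An Ingress can also terminate TLS so the application is served over HTTPS. The sketch below extends the example above with a tls section; the Secret name myapp-tls-cert is an assumption and must contain a certificate valid for myapp.example.com.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-application-ingress
spec:
  tls:
  - hosts:
    - "myapp.example.com"
    secretName: myapp-tls-cert # Assumed Secret of type kubernetes.io/tls
  rules:
  - host: "myapp.example.com"
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: my-backend-service
            port:
              number: 80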

Gateway API (The Evolution of Ingress)

Problem: The Ingress resource, while useful, is basic and has limitations. It combines infrastructure concerns (like which load balancer to use) with application routing rules, which can be inflexible and confusing for larger teams.

Solution: The Gateway API is the next-generation API for managing traffic in Kubernetes. It is a more powerful, flexible, and role-oriented replacement for Ingress. The core idea is to separate the concerns of the cluster operator (who manages the infrastructure) from the application developer (who manages the application routing).

This separation is achieved through two main resources:

  • Gateway: A resource defined by the cluster operator that requests a load balancer and defines where traffic can enter the cluster.
  • HTTPRoute: A resource defined by the application developer that attaches to a Gateway and defines the routing rules for their specific application.

Service Proxy on the Contain Platform supports the Gateway API.

Example

1. The Cluster Operator defines a Gateway

This Gateway requests a load balancer and specifies that it will listen on port 443 for HTTPS traffic addressed to any subdomain of example.com (*.example.com).

# gateway-example.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: contour
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: "*.example.com"
    tls:
      mode: Terminate
      certificateRefs:
      - name: my-tls-cert
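
The certificateRefs entry above points at a Secret named my-tls-cert in the Gateway's namespace. As a sketch, this is a standard Secret of type kubernetes.io/tls, which can also be created with kubectl create secret tls my-tls-cert --cert=tls.crt --key=tls.key; the data values below are placeholders.

# tls-secret-example.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-cert
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate> # Placeholder
  tls.key: <base64-encoded private key> # Placeholder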

2. The Application Developer defines an HTTPRoute

This HTTPRoute attaches to my-gateway and routes traffic for app.example.com to the my-frontend-service. The developer doesn't need to know or care about the underlying load balancer details.

# httproute-example.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-frontend-route
spec:
  parentRefs:
  - name: my-gateway
  hostnames: ["app.example.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-frontend-service
      port: 80

This role-based separation makes managing complex routing scenarios much safer and more scalable.
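
The Gateway API also expresses routing that plain Ingress cannot, such as weighted traffic splitting. The sketch below is a variant of the route above that sends roughly 90% of requests to one service and 10% to another; the service names my-frontend-v1 and my-frontend-v2 are hypothetical.

# httproute-canary-example.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-frontend-canary-route
spec:
  parentRefs:
  - name: my-gateway
  hostnames: ["app.example.com"]
  rules:
  - backendRefs:
    - name: my-frontend-v1 # Hypothetical stable Service
      port: 80
      weight: 90
    - name: my-frontend-v2 # Hypothetical canary Service
      port: 80
      weight: 10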

Network Policies

Problem: By default, the network in a Kubernetes cluster is "flat," meaning any pod can communicate with any other pod. How do you enforce the principle of least privilege and restrict communication between applications?

Solution: A NetworkPolicy allows you to control the traffic flow at the IP address or port level. You can define rules that specify which pods are allowed to communicate with each other and with other network endpoints.

Network Policies are a powerful tool for creating secure, multi-tenant environments by isolating applications from one another.

Example

This NetworkPolicy is applied to all pods with the label app: my-backend. It defines an ingress (inbound) rule that only allows traffic on TCP port 8080 from pods that have the label app: my-frontend. All other inbound traffic to the backend pods will be blocked.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: my-backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-frontend
    ports:
    - protocol: TCP
      port: 8080

Default Network Policies

The Contain Platform enforces a default-deny security model for network traffic. This means that, by default, all pods are isolated and cannot communicate with each other unless a NetworkPolicy explicitly allows it.
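
Conceptually, this default-deny behaviour for inbound traffic corresponds to a NetworkPolicy like the sketch below, which selects every pod in a namespace and allows no ingress; on the Contain Platform you do not need to create it yourself. Allowing traffic again is then an explicit act, as in the backend-allow-from-frontend example above.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {} # An empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  # No ingress rules are defined, so all inbound traffic is denied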