Getting Started with Event Streaming¶
This guide will walk you through the process of configuring your managed Apache Kafka cluster. Following the platform's GitOps approach, you will define your Kafka topics, users, and access control lists (ACLs) as declarative Kubernetes resources using the Strimzi operator.
The Service Model¶
Our Event Streaming service uses the Strimzi operator for Kafka. This creates a shared responsibility model:
- We Manage the Cluster: We provision and manage the core Kafka resource for you. This includes handling high availability, upgrades, and the underlying infrastructure.
- You Manage the Resources: You have full control over your Kafka resources. You define all the necessary components, such as topics and users, as custom resources in your GitOps repository.
Prerequisites¶
Before you begin, you need a managed Kafka cluster instance. Please contact
us to have one provisioned in your namespace. We will provide you with
the name of the Kafka cluster resource you should reference in your resource
definitions.
Step 1: Create a Topic¶
The first step is to define a topic where your messages will be stored. You do
this by creating a KafkaTopic resource.
Create the following manifest:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-app-orders
  namespace: your-application-namespace
  labels:
    # This label is required to link the topic to your Kafka cluster
    strimzi.io/cluster: my-kafka-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000 # 7 days
- metadata.labels."strimzi.io/cluster": This label is mandatory. It tells the Strimzi operator which Kafka cluster this topic belongs to. Use the cluster name we provided.
- spec.partitions: Defines how many partitions the topic will have. More partitions allow for greater parallelism.
- spec.replicas: Defines the replication factor for the topic's data, ensuring durability. This should typically be 3.
Step 2: Create a User¶
Next, create a user for your application to authenticate with the Kafka cluster.
Strimzi will automatically create a corresponding Kubernetes Secret containing
the user's credentials.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-app-user
  namespace: your-application-namespace
  labels:
    # This label is required to link the user to your Kafka cluster
    strimzi.io/cluster: my-kafka-cluster
spec:
  authentication:
    type: scram-sha-512
This manifest creates a user that authenticates using SCRAM-SHA-512. The
operator will generate a Secret named my-app-user containing the password.
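If you need to inspect the generated credentials (for example while debugging), one option is to read the Secret through the Kubernetes API. The following is a minimal sketch using the official kubernetes Python client; it assumes your kubeconfig or in-cluster service account is permitted to read Secrets in your namespace. For your application itself, prefer injecting the password via a secretKeyRef as shown in Step 4.

# Sketch: inspect the credentials Strimzi generated for my-app-user.
# Assumes the official `kubernetes` Python client and permission to read
# Secrets in your-application-namespace.
import base64

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

secret = client.CoreV1Api().read_namespaced_secret(
    name="my-app-user",
    namespace="your-application-namespace",
)

# Secret data values are base64-encoded.
password = base64.b64decode(secret.data["password"]).decode("utf-8")
print("SCRAM password for my-app-user:", password)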
Step 3: Grant Permissions (ACLs)¶
By default, a new user has no permissions. You must explicitly grant access to
topics by adding an authorization section with ACL rules to the KafkaUser
resource (Strimzi does not provide a separate ACL custom resource).
Let's grant our new user permission to write to and read from the
my-app-orders topic by extending the manifest from Step 2:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-app-user
  namespace: your-application-namespace
  labels:
    strimzi.io/cluster: my-kafka-cluster
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      # Allow the user to WRITE to the topic
      - type: allow
        resource:
          type: topic
          name: my-app-orders
          patternType: literal
        operation: Write
      # Allow the user to READ from the topic
      - type: allow
        resource:
          type: topic
          name: my-app-orders
          patternType: literal
        operation: Read
Step 4: Connect Your Application¶
Finally, configure your application's Deployment to connect to Kafka. Your
application should get its connection details from environment variables, which
are populated from the Secret created in Step 2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-producer-app
  namespace: your-application-namespace
spec:
  # ... other deployment specs
  template:
    # ... other pod specs
    spec:
      containers:
        - name: producer-container
          image: your-image/producer
          env:
            # The bootstrap server DNS name follows this pattern:
            # <cluster-name>-kafka-bootstrap.<namespace>.svc.cluster.local:9093
            - name: KAFKA_BOOTSTRAP_SERVERS
              value: "my-kafka-cluster-kafka-bootstrap.your-application-namespace.svc.cluster.local:9093"
            - name: KAFKA_USERNAME
              value: "my-app-user"
            - name: KAFKA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-app-user # The Secret created by the KafkaUser
                  key: password
Client Configuration
Your Kafka client must be configured to use SASL/SCRAM-SHA-512 for
authentication and TLS for encryption when connecting to the bootstrap server
on port 9093.
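As an illustration, here is a minimal producer sketch using the kafka-python library (an assumption; any SASL/SCRAM-capable client works). It reads the environment variables defined in the Deployment above and assumes the cluster CA certificate has been mounted at /etc/kafka/ca.crt, for example from the cluster CA Secret that Strimzi generates.

# Minimal producer sketch (kafka-python). Names and paths are illustrative:
# the CA certificate is assumed to be mounted at /etc/kafka/ca.crt.
import os

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=os.environ["KAFKA_BOOTSTRAP_SERVERS"],
    security_protocol="SASL_SSL",       # TLS encryption + SASL authentication
    sasl_mechanism="SCRAM-SHA-512",
    sasl_plain_username=os.environ["KAFKA_USERNAME"],
    sasl_plain_password=os.environ["KAFKA_PASSWORD"],
    ssl_cafile="/etc/kafka/ca.crt",      # cluster CA certificate
)

producer.send("my-app-orders", b'{"order_id": 1}')
producer.flush()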
Once you commit these resources to your GitOps repository, the Strimzi operator will create the topic and the user and apply its ACLs. Your application will then be able to connect and start producing or consuming messages.
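For the consuming side, a sketch along the same lines might look like the following. The consumer group name my-app-group is illustrative; note that consuming in a group also requires a Read ACL on that group resource (type: group) in the user's authorization section.

# Minimal consumer sketch (kafka-python). The consumer group "my-app-group" is
# illustrative; the KafkaUser also needs a Read ACL on this group resource.
import os

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-app-orders",
    bootstrap_servers=os.environ["KAFKA_BOOTSTRAP_SERVERS"],
    group_id="my-app-group",
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-512",
    sasl_plain_username=os.environ["KAFKA_USERNAME"],
    sasl_plain_password=os.environ["KAFKA_PASSWORD"],
    ssl_cafile="/etc/kafka/ca.crt",
    auto_offset_reset="earliest",
)

for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)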
Next Steps¶
You have now configured a basic event streaming setup. Strimzi can manage many
other resources, including KafkaConnector, KafkaBridge, and more.
- To learn more, explore the official Strimzi Operator documentation.