Introduction to the Event Streaming Service¶
The Event Streaming service provides a powerful, scalable, and durable platform for handling high-volume, real-time data streams. It enables you to publish, subscribe to, store, and process streams of events as they happen, forming the backbone of event-driven architectures and real-time data pipelines.
While a traditional message queue is great for distributing tasks, an event streaming platform is designed to handle a continuous flow of data. It acts as a durable, append-only log of events, allowing multiple, independent consumer applications to read and re-read the data stream at their own pace.
Our managed Event Streaming service is built on Apache Kafka, the industry-standard, open-source platform for building real-time streaming applications. We use the Strimzi operator to deploy and manage Kafka clusters on Kubernetes.
This provides a cloud-native approach to managing your event streaming
infrastructure. We manage the core Kafka cluster resource for you, ensuring it
is highly available, secure, and scalable. You, in turn, manage your topics,
users, and access control lists (ACLs) declaratively, as code, using Kubernetes
custom resources.
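For example, with Strimzi a topic is declared as a `KafkaTopic` custom resource that the operator reconciles against the cluster. The sketch below is illustrative: the cluster name `my-cluster`, the namespace `kafka`, and the topic settings are placeholder assumptions, not values your deployment requires.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders
  namespace: kafka
  labels:
    # Tells the Strimzi Topic Operator which Kafka cluster this topic belongs to
    strimzi.io/cluster: my-cluster
spec:
  partitions: 6      # degree of consumer parallelism
  replicas: 3        # copies of each partition for fault tolerance
  config:
    retention.ms: 604800000   # retain events for 7 days
```

Applying a manifest like this with `kubectl apply` creates the topic, and later changes to the manifest are reconciled into the cluster, so topic definitions can live in version control next to your application code.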
When to Use Event Streaming¶
Event streaming excels at handling continuous data flows and making them available to multiple systems. Common use cases include:
- Real-Time Data Pipelines: Reliably move data from source systems (like databases or applications) to target systems (like data warehouses or analytics engines) in real time.
- Event-Driven Architectures: Build decoupled systems where services communicate by reacting to streams of events, leading to more scalable and resilient applications.
- Log and Metrics Aggregation: Collect and process logs and metrics from thousands of services in a centralized and scalable manner.
- Stream Processing: Perform real-time processing on data in motion, such as filtering, aggregating, or enriching events as they are published to the stream.
Features¶
- High Throughput: Capable of handling millions of events per second, making it suitable for the most demanding real-time workloads.
- Durable and Persistent Storage: Events are written to a persistent, replicated log on disk, guaranteeing data safety and allowing you to retain event streams for long periods.
- Scalability and High Availability: Deployed as a distributed, fault-tolerant cluster, Kafka can be scaled horizontally by adding brokers and partitions as demand grows.
- Rich Ecosystem: Kafka has a large ecosystem of connectors and stream processing libraries (like Kafka Streams and ksqlDB) that simplify the process of building streaming applications.
- Declarative Resource Management: Define your Kafka topics, users, and ACLs as Kubernetes custom resources right alongside your application code. This enables a true GitOps workflow for your event streaming infrastructure.
- Managed Cluster, User-Managed Resources: We handle the complex task of operating the Kafka cluster itself—setup, configuration, security, and upgrades. You retain full control over your application's topics and users, which you can manage declaratively.
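As a sketch of this declarative workflow, users and their ACLs can also be expressed as Strimzi `KafkaUser` resources. The names below (`my-cluster`, the `orders` topic, the `order-consumers` consumer group) are hypothetical placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: order-consumer
  labels:
    # Binds this user to the Kafka cluster managed by Strimzi
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls          # client credentials issued and rotated by the User Operator
  authorization:
    type: simple       # Kafka's built-in ACL authorizer
    acls:
      # Allow this user to read from the orders topic...
      - resource:
          type: topic
          name: orders
        operations:
          - Read
          - Describe
      # ...using its consumer group
      - resource:
          type: group
          name: order-consumers
        operations:
          - Read
```

Because both topics and users are plain Kubernetes resources, access policy can be reviewed, versioned, and rolled back through the same GitOps pipeline as the rest of your application.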
Pricing, Legal and Support¶
Tip
For general information about pricing, legal matters, or support concerning the platform, its services, or components, consult your contract or see the contact page.