Getting Started with Application Observability¶
This guide will walk you through the essential steps to enable Application Observability for your applications. By following these steps, you will activate the collection of your application's logs, metrics, and traces and learn how to verify that your data is arriving in your dedicated Grafana instance.
Prerequisites¶
Before you begin, please ensure you have the following:
- An Application Deployed: You should have an application running in your Kubernetes namespace. For information on getting access to your cluster, see the Contain Base getting started guide.
- An Instrumented Application: Your application must be capable of providing telemetry data.
    - Logs: The application should log to `stdout` and `stderr`.
    - Metrics: The application should expose a Prometheus-style `/metrics` endpoint. Using OpenTelemetry is highly recommended.
    - Traces: If trace collection is desired, the application must be configured to export traces in either Jaeger or OpenTelemetry (OTLP) format.
Step 1: Enable Collection for Your Namespace¶
Data collection is enabled on an opt-in basis for each namespace. To activate the service, you must add a specific label to your namespace's metadata.
Manifest Example (Recommended)¶
Update your namespace manifest to include the `application-observability.netic.dk/enabled: "true"` label.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: your-namespace
  labels:
    application-observability.netic.dk/enabled: "true"
```
Kubectl Example¶
Alternatively, you can apply the label with kubectl, replacing `your-namespace` with the name of your namespace:
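```bash
kubectl label namespace your-namespace application-observability.netic.dk/enabled=true
```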
Once the label is applied, the OpenTelemetry collectors will automatically begin collecting telemetry data from all pods within that namespace.
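If you want to confirm the label is in place, you can inspect the namespace's labels:

```bash
kubectl get namespace your-namespace --show-labels
```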
Step 2: Sending Logs¶
Log collection is fully automatic. Once your namespace is enabled, the observability collectors will capture all logs written to `stdout` and `stderr` by your application's containers.
Note
Each line written to `stdout` is captured and indexed as a separate log entry. This is important to remember when analyzing multi-line output, such as stack traces.
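To preview what will be shipped, you can view the same log stream locally with kubectl, substituting your own workload name and namespace; what you see here is what the collectors capture:

```bash
kubectl logs deployment/your-application -n your-application-namespace
```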
Step 3: Sending Metrics¶
To have your application's metrics collected, you need to add annotations to your pods to tell the collector how to scrape them. The collector will automatically discover and scrape pods that have these annotations.
Add the following annotations to your Deployment, StatefulSet, or Pod
metadata:
- `prometheus.io/scrape: "true"`: Enables scraping for this pod.
- `prometheus.io/path: "/metrics"`: The URL path of the metrics endpoint (defaults to `/metrics`).
- `prometheus.io/port: "8080"`: The port number of the metrics endpoint.
Manifest Example¶
Here is an example of a Deployment with the required annotations:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-application
  namespace: your-application-namespace
spec:
  # ... other deployment specs
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "8080"
      labels:
        app: your-application
    spec:
      containers:
        - name: your-application-container
          image: your-image
          ports:
            - containerPort: 8080
              name: http
```
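Before relying on the collector, you can check the endpoint yourself by port-forwarding to the workload and requesting the metrics path configured above (the names and ports here match the example manifest):

```bash
# Forward local port 8080 to the application's container port
kubectl port-forward deployment/your-application 8080:8080 -n your-application-namespace

# In a second terminal, fetch the metrics endpoint
curl http://localhost:8080/metrics
```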
Step 4: Sending Traces¶
To collect and forward your application's traces, you can use one of two modes: Sidecar or Gateway. The best choice depends on your application's architecture and configuration needs.
Which Mode Should I Choose?¶
| Feature | Sidecar Mode | Gateway Mode |
|---|---|---|
| Resource Usage | Higher (one agent per app pod) | Lower (centralized agents) |
| Configuration | Simple pod annotation | Requires application-level changes |
| Best For | Quick setup, resource isolation. | Resource efficiency, custom endpoints. |
Sidecar Mode¶
In this mode, a dedicated OpenTelemetry agent is automatically deployed as a
"sidecar" container within each of your application pods. Your application
should be configured to send traces to this local agent at localhost.
To enable the Sidecar mode, add the following annotation to your pod's metadata
in your Deployment, StatefulSet, or other workload manifest.
Manifest Example¶
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        sidecar.opentelemetry.io/inject: "true"
    spec:
      containers:
        - name: my-app-container
          # Your container spec here...
```
Your application should then be configured to send traces to the appropriate
localhost port listed in the table below.
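If your application uses an OpenTelemetry SDK, one common way to do this is the standard `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable; the exact mechanism depends on your language and tracing library, so treat the following as a sketch rather than the only option:

```yaml
# Excerpt of the pod template's container spec
containers:
  - name: my-app-container
    env:
      # Standard OpenTelemetry SDK variable; the exporter sends OTLP over gRPC
      # to the sidecar agent on localhost.
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://localhost:4317"
```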
Gateway Mode¶
In this mode, your application instances send traces directly to a centralized telemetry collection service running within the cluster. This is more resource-efficient as it does not require a sidecar for every pod.
To use Gateway mode, configure your application's telemetry exporter to send traces directly to the gateway's service endpoint:
```
oaas-observability-collector.netic-observability-system.svc.cluster.local
```
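As with the sidecar sketch above, applications using an OpenTelemetry SDK can typically be pointed at the gateway via the standard `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable; only the endpoint changes, here using the OTLP gRPC port from the table below:

```yaml
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://oaas-observability-collector.netic-observability-system.svc.cluster.local:4317"
```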
Supported Formats & Endpoints¶
| Protocol | Format | Sidecar Endpoint | Gateway Endpoint |
|---|---|---|---|
| Jaeger | gRPC | `localhost:14250` | `oaas-observability-collector.netic-observability-system.svc.cluster.local:14250` |
| OTLP | gRPC | `localhost:4317` | `oaas-observability-collector.netic-observability-system.svc.cluster.local:4317` |
Step 5: Verifying Your Data in Grafana¶
Once your application is configured, you can view your telemetry data in your dedicated Grafana instance.
- Navigate to your organization's Grafana URL, which follows the pattern `https://<your-org>.observability.netic.dk`.
- From the main menu, click the Explore icon.
- In the Explore view, use the dropdown at the top of the page to switch between data sources:
    - Select the Loki data source to query your logs.
    - Select the Mimir data source to query your Prometheus metrics.
    - Select the Tempo data source to search for your traces.
If your setup is correct, you should see data from your application appearing within a few minutes.
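As a quick smoke test, you can run a simple query against the Loki data source. The exact label names depend on how the collectors enrich your telemetry (a `namespace` label is a common convention, but use Grafana's label browser to confirm what is available in your instance):

```logql
{namespace="your-namespace"}
```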
Next Steps¶
You have now successfully configured your application for observability!
- Explore your data in Grafana to start building queries and dashboards.
- Review the Usage Dashboard in Grafana to monitor your data ingestion and associated costs.