IBM Garage Event-Driven Reference Architecture

Monitoring IBM Event Streams on OpenShift Cloud Platform

Overview

Deploying IBM Event Streams on OpenShift Cloud Platform (OCP) as the Apache Kafka-based event backbone is a great first step in your Event-Driven Architecture implementation. However, now you must maintain that Kafka cluster and understand the intricate details of what a “healthy” cluster looks like. This tutorial will walk you through some of the initial monitoring scenarios that are available for IBM Event Streams deployed on OCP.

The raw monitoring use cases and capabilities are described in the official IBM Event Streams documentation.

This tutorial will focus on a more guided approach to understanding the foundation of Apache Kafka monitoring capabilities provided by IBM Event Streams and the IBM Cloud Pak for Integration. Upon completion of this tutorial, you can extend your own experience through the Advanced Scenarios section to adapt Kafka monitoring capabilities to your project’s needs.

Scenario Prereqs

OpenShift Container Platform

  • This deployment scenario was developed for use on the OpenShift Container Platform, with a minimum version of 4.4.

Cloud Pak for Integration

IBM Event Streams

  • This deployment scenario requires a working installation of IBM Event Streams V10.0 or greater, deployed on the Cloud Pak for Integration environment mentioned above.
  • For Cloud Pak installation guidance, you can follow the Cloud Pak Playbook installation instructions.

Git

  • The git CLI is required to clone the sample repositories used in this tutorial.

Java

  • Java Development Kit (JDK) v1.8+ (Java 8+)

Maven

  • The scenario uses Maven v3.6.x

Generate Event Load

This section walks through generating a starter application for use with IBM Event Streams, as documented in the official product documentation.

  1. Log into the IBM Event Streams Dashboard.

  2. Click the Try the starter application button from the Getting Started page.

  3. Click Download JAR from GitHub. This will open a new window to https://github.com/ibm-messaging/kafka-java-vertx-starter/releases

    • Click the link for demo-all.jar from the latest release available. At the time of this writing, the latest version was 1.0.0.

  4. Return to the Configure & run starter application window and click Generate properties.

  5. In the dialog that opens on the right-hand side of the screen, enter the following information:

    • Starter application name: monitoring-lab-[your-initials]
    • Leave New topic selected and enter a Topic name of monitoring-lab-topic-[your-initials].
    • Click Generate and download .zip
  6. In a Terminal window, unzip the ZIP file generated in the previous step and move the demo-all.jar file into the same folder.

  7. Review the extracted kafka.properties file to understand the credentials and connection configuration that Event Streams has generated for this sample application.

  8. Run the command java -Dproperties_path=./kafka.properties -jar demo-all.jar.

  9. Wait until you see the string Application started in X ms in the output and then visit the application’s user interface via http://localhost:8080.

  10. Once in the User Interface, enter a message to use as the Kafka record value, then click Start producing.

  11. Wait a few moments until the UI updates to show some of the confirmed produced messages and offsets, then click on Start consuming on the right side of the application.

  12. In the IBM Event Streams user interface, go to the topic you sent the messages to and verify that the messages have arrived.

  13. You can leave the application running for the rest of the lab, or you can perform any of the following actions:

    • If you would like to stop the application from producing, you can click Stop producing.
    • If you would like to stop the application from consuming, you can click Stop consuming.
    • If you would like to stop the application entirely, press Control+C in the Terminal session where the application is running.
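For reference, the generated kafka.properties reviewed in step 7 typically contains standard Kafka client connection settings along the lines of the sketch below. Every value here is an illustrative placeholder, not what your generated file will actually contain; review your own file for the real endpoints and credentials.

```properties
# Illustrative sketch only - your generated file contains real values
bootstrap.servers=<your-event-streams-bootstrap-address>:443
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<app-username>" password="<app-password>";
ssl.truststore.location=ca.p12
ssl.truststore.password=<truststore-password>
# The starter application also reads the topic name from this file
topic=monitoring-lab-topic-xyz
```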

An alternative sample application, available from the official documentation, can be used to generate higher volumes of load.

Explore the preconfigured Event Streams Dashboard

This section will walk through the default dashboard and user interface available on every IBM Event Streams deployment.

  1. Log into the Event Streams Dashboard.

  2. Click the Monitoring tab from the primary navigation menu on the left hand side.

  3. From here, you can view information on messages, partitions, and replicas for the past hour, day, week, or month.

  4. Click the Topics tab from the primary navigation menu on the left hand side.

  5. Click the name of your topic that you previously created in the Generate Event Load section. This should be in the format of monitoring-lab-topic-[your-initials].

    • You are presented with a Producers page showing the number of active producers, as well as the average message size produced per second and average number of messages produced per second. You can modify the time window by changing the values in the View producers by time box.
    • Click the Messages tab to view all the data and metadata for events stored in the topic. You can view messages across partitions or on specific partitions, as well as jump to specific offsets or timestamps.
    • Click Consumer Groups to see how many consumer groups have previously registered, or are currently registered, as consuming from the topic.
    • You are able to see how many active members a consumer group has, as well as how many unconsumed messages a consumer group has on the topic (also known as consumer group lag), a key metric for driving parallelism in event-driven microservices!
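The consumer group lag shown in this view is the difference between each partition's latest (log-end) offset and the group's last committed offset, summed across partitions. A minimal Python sketch of that calculation, using made-up offset numbers:

```python
def consumer_group_lag(end_offsets, committed_offsets):
    """Total lag: sum of (log-end offset - committed offset) per partition.

    Simplification: a partition the group has never committed to counts as
    fully unconsumed from offset 0 (real behavior depends on auto.offset.reset).
    """
    return sum(end - committed_offsets.get(partition, 0)
               for partition, end in end_offsets.items())

# Hypothetical offsets for a three-partition topic
end_offsets = {0: 120, 1: 95, 2: 110}
committed = {0: 100, 1: 95}  # the group has not committed on partition 2
print(consumer_group_lag(end_offsets, committed))  # 20 + 0 + 110 = 130
```

A lag that grows steadily while consumers are active is the usual signal to add consumer instances, up to the topic's partition count.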

Import Grafana Dashboards

This section will walk through the Grafana Dashboard capabilities documented in the official IBM Event Streams documentation.

  1. Apply the Grafana Dashboard for overall Kafka Health via a MonitoringDashboard custom resource:

    oc apply -f https://raw.githubusercontent.com/ibm-messaging/event-streams-operator-resources/master/grafana-dashboards/ibm-eventstreams-kafka-health-dashboard.yaml

View Grafana Dashboards

To view the newly imported Event Streams Grafana dashboard for overall Kafka Health, follow these steps:

  1. Navigate to the IBM Cloud Platform Common Services console homepage via https://cp-console.[cluster-name], click the hamburger icon in the top left, and click Monitoring in the expanded menu to open the Grafana homepage.

  2. Click the user icon in the bottom left corner to open the user profile page.

  3. In the Organizations table, find the namespace where you installed the Event Streams monitoringdashboard custom resource (most likely eventstreams), and switch the user profile to that namespace.

  4. Hover over the Dashboards square on the left and click Manage.

  5. Click on IBM Event Streams Kafka dashboard in the Dashboard table to view the newly imported resource.

  6. Using the drop-down selectors at the top, select the following:

    • Namespace which has the running instance of your Event Streams deployment,
    • Cluster Name for the desired Event Streams cluster
    • Topic that matches desired topics for viewing (only topics that have been published to will appear in this list)
    • Broker to select individual or multiple brokers in the cluster.

Note: Not all of the metrics that Kafka uses are published to Prometheus by default. The metrics that are published are controlled by a ConfigMap. You can publish metrics by adding them to the ConfigMap.
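As a sketch of what such a configuration can look like: the JMX-to-Prometheus mappings follow the Prometheus JMX Exporter rule format. The ConfigMap name and data key below are assumptions for illustration; check the official documentation for the resource your Event Streams operator version actually references.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-metrics-config   # hypothetical name
data:
  kafka-metrics-config.yaml: |
    lowercaseOutputName: true
    rules:
    # Expose the broker's PartitionCount MBean to Prometheus as
    # kafka_server_replicamanager_partitioncount_value
    - pattern: kafka.server<type=ReplicaManager, name=PartitionCount><>Value
      name: kafka_server_replicamanager_partitioncount_value
```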

Create an Alert

A monitoring system is only as good as the alerts it can send out, since you’re not going to be watching that Grafana dashboard all day and night! This section will walk through the creation of a quick alert rule which will automatically trigger, as well as how to view and silence that alert in the provided Alertmanager interface.

The official Event Streams documentation provides a walkthrough of selecting the desired metrics to monitor, but for our example, we will leverage the kafka_server_replicamanager_partitioncount_value metric as an indicator of topic creation (as the overall partition count will increase when a topic is first created).

  1. On the command line, create a sample rule which will fire whenever the partition count is over 50 (the baseline number of partitions the Event Streams system uses for its internal topics). To do this, create a prom-rule-partitions.yaml file with the following content:

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      labels:
        component: icp-prometheus
      name: demo-partition-count
    spec:
      groups:
      - name: PartitionCount
        rules:
        - alert: PartitionCount
          # Fires when the partition count exceeds the 50 internal
          # partitions that Event Streams creates by default
          expr: kafka_server_replicamanager_partitioncount_value > 50
          annotations:
            description: There are more than 50 partitions in the cluster
  2. Create the alert rule via the OpenShift CLI:

    oc apply -f prom-rule-partitions.yaml
  3. You can view the creation and status of your alert via the OpenShift CLI:

    oc get PrometheusRule demo-partition-count
    oc describe PrometheusRule demo-partition-count
  4. Access the Prometheus monitoring backend that is provided in the IBM Cloud Pak Common Services via https://cp-console.[cluster-name]/prometheus.

  5. Click the Alerts button in the header.

  6. You should see your new PartitionCount rule firing and highlighted in red.

    • NOTE: If you do not see your PrometheusRule, you may need to create it in the ibm-common-services namespace depending upon your OpenShift cluster and Cloud Pak operator configuration. This can be done by supplying the -n ibm-common-services flag to the oc apply -f prom-rule-partitions.yaml command.
  7. Click on the PartitionCount alert to expand the details and see which components are triggering the alert.


Now that we have created alerts from the monitoring system, you will want a way to manage those alerts. The default Alertmanager component provides a way to manage firing alerts, notifications, and silences. Prometheus is capable of integrating with many notification systems - from Slack to PagerDuty to HipChat to common HTTP webhooks. For further information on the extensibility of Prometheus, you can view the Alerting configuration section of the official docs. For configuring the IBM Cloud Pak Common Services deployed instance of Prometheus, you can view the Configuring Alertmanager section of the official docs.
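For example, routing alerts to a Slack channel uses an Alertmanager receiver configuration along these lines; the receiver name, webhook URL, and channel below are placeholders:

```yaml
route:
  receiver: kafka-team          # default receiver for all alerts
receivers:
- name: kafka-team
  slack_configs:
  - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder webhook
    channel: '#kafka-alerts'
```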

In this section of the tutorial, you will walk through the Alertmanager interface and silence the previously created alerts.

  1. Access the default Alertmanager instance via https://cp-console.apps.[cluster-name]/alertmanager/.

    • You should see the newly created PartitionCount alerts listed as firing.
  2. Click on the Info button for the first alert to see the additional context provided by the alert definition (i.e. there are more than 50 partitions).

  3. As alerts fire and become acknowledged, you can silence them to mark them as known, acknowledged, or resolved. To do this, you create a Silence. Click the Silence button for one of the alerts in the list.

  4. You will see a start time, a duration, and an end time by default. This gives you initial control over what you are silencing and for how long.

  5. Next, you will see a list of Name and Value pairs that are filled with the information from the alert instance you clicked on.

  6. Delete the elements in the Matchers list until only the following items are left. This will allow for a robust capture of all the PartitionCount alerts for the same Event Streams cluster.

    • alertname
    • app_kubernetes_io_instance
    • app_kubernetes_io_part_of
    • kubernetes_namespace
  7. Your username should already be filled in for the Creator, so enter a Comment of “Silencing demo alerts” and click Preview Alerts.

  8. Once the affected number of alerts matches the same number of PartitionCount alerts that were listed as firing in Prometheus, click Create.

  9. Click the Alerts tab in the header. You will now see that those alerts are silenced, meaning they have been acknowledged.

  10. To make them visible again prior to the expiration of the created Silence, click on the Silences tab from the header. This page lists all the Active, Pending, and Expired silences in the system. You can view, edit, and expire any active Silence to again have the alerts show up in Alertmanager or anywhere else Prometheus is sending notifications.


Next Steps

External Monitoring Tools

IBM Event Streams supports additional monitoring capabilities with third-party monitoring tools via a connection to the cluster's JMX port on the Kafka brokers.

You must first configure your IBM Event Streams instance for specific access by these external monitoring tools.

You can then follow along with the tutorials defined in the official IBM Event Streams documentation to monitor Event Streams with tools such as Datadog and Splunk.

Advanced Scenarios

As shown in this tutorial, IBM Event Streams provides a robust default set of monitoring metrics which are available to use right out of the box. However, you will most likely need to define custom metrics or extend existing metrics for use in custom dashboards or reporting processes. The following links (in order of recommended usage) discuss additional monitoring capabilities, technologies, and endpoints that are supported with IBM Event Streams to extend your custom monitoring solution as needed:

  • Kafka Exporter - You can use Event Streams to export metrics to Prometheus. These metrics are otherwise only accessible through the Kafka command line tools and allow per-topic metrics, such as consumer group lag, to be collected.

  • JMX Exporter - You can use Event Streams to collect JMX metrics from Kafka brokers, ZooKeeper nodes, and Kafka Connect nodes, and export them to Prometheus via the Prometheus JMX Exporter.

  • JmxTrans - JmxTrans can be used to push JMX metrics from Kafka brokers to external applications or databases.

Additional Reading