IBM Garage Event-Driven Reference Architecture

Common prerequisites

IBM Cloud Shell

Here we are going to set up our IBM Cloud Shell with all the tools required to carry out this lab.

Start your IBM Cloud Shell by pointing your browser to https://cloud.ibm.com/shell


IBM Cloud Pak CLI

Cloudctl is a command-line tool for managing Container Application Software for Enterprises (CASEs). This CLI allows us to manage Cloud Pak components, as well as software installed through any IBM Cloud Pak, such as IBM Event Streams.

In order to install it, execute the following commands in your IBM Cloud Shell:

  1. Download the IBM Cloud Pak CLI:
     curl -L https://github.com/IBM/cloud-pak-cli/releases/latest/download/cloudctl-linux-amd64.tar.gz -o cloudctl-linux-amd64.tar.gz
  2. Untar it:
     tar -xvf cloudctl-linux-amd64.tar.gz
  3. Rename the binary for ease of use:
     mv cloudctl-linux-amd64 cloudctl
  4. Add it to the PATH environment variable:
     export PATH=$PATH:$PWD
  5. Make sure the IBM Cloud Pak CLI is in the path:
     which cloudctl
  6. Make sure the IBM Cloud Pak CLI works:
     cloudctl help

Event Streams plugin for IBM Cloud Pak CLI

This plugin will allow us to manage IBM Event Streams.

In order to install it, execute the following commands in your IBM Cloud Shell:

  1. Download the Event Streams plugin for the IBM Cloud Pak CLI:
     curl -L http://ibm.biz/es-cli-linux -o es-plugin
  2. Install it:
     cloudctl plugin install es-plugin
  3. Make sure it works:
     cloudctl es help

Git

IBM Cloud Shell comes with Git already installed out of the box.

Vi

IBM Cloud Shell comes with Vi already installed out of the box.

Python 3

IBM Cloud Shell comes with Python 3 already installed out of the box. However, we need to install two modules that will be used later in this tutorial, when we run a Python application to work with Avro, schemas, and messages: confluent_kafka and avro-python3.

In order to install these modules, execute the following command in your IBM Cloud Shell:

  1. Install the modules:
     python3 -m pip install avro-python3 confluent_kafka
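To sanity-check the install, you can import both modules from the command line (note that avro-python3 is imported as avro; printing the client version is just a quick verification, not part of the lab):

    python3 -c 'import avro, confluent_kafka; print(confluent_kafka.version())'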

Congrats! Your IBM Cloud Shell is now ready for the rest of this lab.

Install Event Streams using operators

See the product instructions to deploy using IBM operators. Here is a summary of the steps using the CLI:

  • Verify the operators are visible in the OpenShift marketplace:

    oc get catalogsource -n openshift-marketplace
    NAME                   DISPLAY                TYPE   PUBLISHER   AGE
    certified-operators    Certified Operators    grpc   Red Hat     15d
    community-operators    Community Operators    grpc   Red Hat     15d
    ibm-operator-catalog   IBM Operator Catalog   grpc   IBM         6s
    opencloud-operators    IBMCS Operators        grpc   IBM         23h
    redhat-marketplace     Red Hat Marketplace    grpc   Red Hat     15d
    redhat-operators       Red Hat Operators      grpc   Red Hat     15d
  • If the IBM catalogs are not displayed, add the following:

    oc apply -f https://raw.githubusercontent.com/ibm-cloud-architecture/refarch-eda-tools/master/evenstreams-config/ibm-catalog.yaml
    oc apply -f https://raw.githubusercontent.com/ibm-cloud-architecture/refarch-eda-tools/master/evenstreams-config/ibm-cs-catalog.yaml
  • Create a project to host the Event Streams cluster:

    oc new-project eventstreams
  • Use the OpenShift console -> Operators -> OperatorHub to search for Event Streams, then install the operator (the last test was done on version 2.1.0). We define the Kafka cluster operator to manage instances in any project, which means the operator will be deployed to openshift-operators, using channel version 2.1. It takes some time before the operator pod is scheduled.

  • Get your license entitlement key from this website: https://myibm.ibm.com/products-services/containerlibrary, and save it in a file named key.

  • Create a secret to access the IBM docker registry to get access to the product image:

    oc create secret docker-registry ibm-entitlement-key --docker-username=cp --docker-password=$(cat key) --docker-server="cp.icr.io" -n eventstreams
  • Install an Event Streams instance: instances of Event Streams can be created after the Event Streams operator is installed. You can use the OpenShift console or our predefined cluster definition:

    oc apply -f https://raw.githubusercontent.com/ibm-cloud-architecture/refarch-eda-tools/master/evenstreams-config/eventstreams-light-insecure.yaml
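
Reconciliation can take several minutes. To check that the instance comes up, you can watch the custom resource and the pods the operator creates; the commands below assume the eventstreams project created earlier and that the EventStreams custom resource definition is installed:

    # Check the Event Streams custom resource (its status should eventually report Ready)
    oc get eventstreams -n eventstreams
    # Watch the pods created by the operator (Kafka, ZooKeeper, UI, admin API, ...)
    oc get pods -n eventstreams -w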

Log into Event Streams

In this section we are going to see how to log into our IBM Event Streams console both through the UI and CLI.

UI

In order to log into the IBM Event Streams console:

  1. Go to your OpenShift console and click on Operators --> Installed Operators in the left-hand sidebar menu. Then select the project your IBM Event Streams instance was installed into.

  2. Click on the IBM Event Streams operator and then on the Event Streams option listed in the top bar.

  3. Click on the IBM Event Streams instance whose console you want to access and scroll down until you see the Admin UI attribute, which displays the route to that instance’s console.

  4. Click on the route link and enter your IBM Event Streams credentials.

CLI

In order to log into the IBM Event Streams console through the CLI, we are going to use the oc OpenShift CLI, the cloudctl Cloud Pak CLI and the es Cloud Pak CLI plugin. You can check above in this readme how to get these installed. We assume you are already logged into your OpenShift cluster through the oc OpenShift CLI (if not, log into your OpenShift cluster console using the user interface and click on the Copy Login Command option displayed when you click your user avatar in the top right corner; a sketch of the resulting command follows below).
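
The copied login command has the following shape; the token and server values are placeholders for whatever your console gives you:

    # Paste the command copied from the OpenShift console; both values below are placeholders
    oc login --token=<api-token> --server=https://<cluster-api-endpoint>:6443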

  1. Get your Cloud Pak console route (you may need cluster-wide admin permissions to do so, as the Cloud Pak is usually installed in the ibm-common-services namespace by the cluster admins):

    $ oc get routes -n ibm-common-services | grep console
    cp-console   cp-console.apps.eda-sandbox-delta.gse-ocp.net   icp-management-ingress   https   reencrypt/Redirect   None
  2. Log into IBM Event Streams using the Cloud Pak console route from the previous step:

    $ cloudctl login -a https://cp-console.apps.eda-sandbox-delta.gse-ocp.net --skip-ssl-validation
    Username> user50
    Password>
    Authenticating...
    OK
    Targeted account mycluster Account
  3. Initialize the Event Streams CLI plugin (make sure you provide the namespace your IBM Event Streams instance is installed in, since the command will fail without it if you don’t have cluster-wide admin permissions):

    $ cloudctl es init -n integration
    IBM Cloud Platform Common Services endpoint: https://cp-console.apps.eda-sandbox-delta.gse-ocp.net
    Namespace: integration
    Name: kafka
    IBM Cloud Pak for Integration UI address: https://integration-navigator-pn-integration.apps.eda-sandbox-delta.gse-ocp.net
    Event Streams API endpoint: https://kafka-ibm-es-admapi-external-integration.apps.eda-sandbox-delta.gse-ocp.net
    Event Streams API status: OK
    Event Streams UI address: https://kafka-ibm-es-ui-integration.apps.eda-sandbox-delta.gse-ocp.net

Create Event Streams Topics

This section is a generic example of the steps to define a topic with Event Streams on OpenShift. The example defines a topic named INBOUND with 1 partition and a replication factor of 3.

UI

  1. Log into your IBM Event Streams instance through the UI as explained in the previous section.

  2. Click on the Topics option on the navigation bar on the left.

  3. In the topics page, click on the blue Create topic button in the top right corner.

  4. Provide a name for your topic.

  5. Leave Partitions at 1.

  6. Depending on how long you want messages to persist, you can change the Message Retention setting.

  7. You can leave Replication Factor at the default 3.

  8. Click Create.

  9. Make sure the topic has been created by navigating back to the Topics section, which you can find in the left-hand menu bar of the IBM Event Streams user interface.

CLI

  1. Log into your IBM Event Streams instance using the CLI as explained in the previous section.

  2. Create the topic with the desired specification:

    cloudctl es topic-create --name INBOUND --partitions 1 --replication-factor 3
  3. Make sure the topic has been created by listing the topics.

    $ cloudctl es topics
    Topic name
    INBOUND
    OK
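
If you also want to set the message retention discussed in the UI steps, topic configuration can be passed at creation time. The sketch below assumes a --config flag taking standard Kafka topic settings; check cloudctl es topic-create --help for the exact option name in your version:

    # Sketch: create the topic with an explicit retention period of one day (86400000 ms);
    # the --config flag is an assumption, verify it with `cloudctl es topic-create --help`
    cloudctl es topic-create --name INBOUND --partitions 1 --replication-factor 3 \
      --config retention.ms=86400000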

Get Kafka Bootstrap Url

In this section we are going to see where to find the Kafka bootstrap URL/address your applications will need in order to connect to IBM Event Streams to consume and produce messages.

UI

You can find your IBM Event Streams Kafka bootstrap URL if you log into the IBM Event Streams user interface, as explained previously in this readme, and click on the Connect to this cluster option displayed on the dashboard. This will display a menu where you will see a URL to the left of the Generate SCRAM credentials button. Make sure that you are on the External connection.


CLI

You can find your IBM Event Streams Kafka bootstrap URL on the last line of the output when you initialize the IBM Event Streams Cloud Pak CLI plugin:

$ cloudctl es init -n integration
IBM Cloud Platform Common Services endpoint: https://cp-console.apps.eda-sandbox-delta.gse-ocp.net
Namespace: integration
Name: kafka
IBM Cloud Pak for Integration UI address: https://integration-navigator-pn-integration.apps.eda-sandbox-delta.gse-ocp.net
Event Streams API endpoint: https://kafka-ibm-es-admapi-external-integration.apps.eda-sandbox-delta.gse-ocp.net
Event Streams API status: OK
Event Streams UI address: https://kafka-ibm-es-ui-integration.apps.eda-sandbox-delta.gse-ocp.net
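
You can also query the bootstrap address directly from the EventStreams custom resource status; the instance name, namespace, and listener type in the sketch below are assumptions matching this readme's examples, so adjust them to your setup:

    # Sketch: read the external bootstrap address from the custom resource status
    # (instance name, namespace, and listener type are assumptions)
    oc get eventstreams kafka -n integration \
      -o jsonpath='{.status.kafkaListeners[?(@.type=="external")].bootstrapServers}'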

Generate SCRAM Service Credentials

For an application to connect to an Event Streams instance through the secured external listener, it needs SCRAM credentials to act as service credentials (as well as the TLS certificate we will talk about in the next section of this readme).

UI

  1. Log into the IBM Event Streams user interface as explained previously in this readme and click on the Connect to this cluster option displayed on the dashboard. This will display a menu. Make sure that you are on the External Connection.

  2. Click on the Generate SCRAM credentials button.

  3. Enter a name for your credentials and choose the option that best suits the needs of your applications (this will create RBAC permissions for your credentials so that the service credentials can only do what they need to do). For this demo, select the last option, Produce messages, consume messages and create topics and schemas.

  4. Decide whether your service credentials need to have the ability to access all topics or certain topics only. For this demo, select All Topics and then click Next.

  5. Decide whether your service credentials need to have the ability to access all consumer groups or certain specific consumer groups only. For this demo, select All Consumer Groups and click Next.

  6. Decide whether your service credentials need to have the ability to access all transactional IDs or certain specific transactional IDs only. For this demo, select All transaction IDs and click on Generate credentials.

  7. Take note of the set of credentials displayed on screen. You will need to provide your applications with these in order to get authenticated and authorized with your IBM Event Streams instance.

  8. If you did not take note of your SCRAM credentials or have forgotten them, note that the step above created a KafkaUser object in OpenShift that is interpreted by the IBM Event Streams operator. You can see this KafkaUser if you go to the OpenShift console, click on Operators --> Installed Operators in the left-hand side menu, then click on the IBM Event Streams operator and finally click on Kafka Users in the top bar menu.

  9. If you click on your Kafka User, you will see the Kubernetes secret behind it that holds your SCRAM credential details.

  10. Click on that secret and you will be able to see your SCRAM password again (your SCRAM username is the same as the name of the KafkaUser created, and of the secret holding your SCRAM password).


CLI

  1. Log into your IBM Event Streams instance through the CLI as already explained before in this readme.

  2. Create your SCRAM service credentials with the following command (adjust the topic, consumer group, transaction ID, etc. permissions your SCRAM service credentials should have in order to satisfy your application requirements):

    $ cloudctl es kafka-user-create --name test-credentials-cli --consumer --producer --schema-topic-create --all-topics --all-groups --all-txnids --auth-type scram-sha-512
    KafkaUser name         Authentication   Authorization   Username                                                Secret
    test-credentials-cli   scram-sha-512    simple          EntityOperator has not created corresponding username   EntityOperator has not created corresponding secret

    Resource type   Name        Pattern type   Host   Operation
    topic           *           literal        *      Read
    topic           __schema_   prefix         *      Read
    topic           *           literal        *      Write
  3. List the KafkaUser objects to make sure yours has been created:

    $ cloudctl es kafka-users
    KafkaUser name         Authentication   Authorization
    test-credentials       scram-sha-512    simple
    test-credentials-cli   scram-sha-512    simple
    OK
  4. To retrieve your credentials execute the following command:

    $ cloudctl es kafka-user test-credentials-cli
    KafkaUser name         Authentication   Authorization   Username               Secret
    test-credentials-cli   scram-sha-512    simple          test-credentials-cli   test-credentials-cli

    Resource type   Name        Pattern type   Host   Operation
    topic           *           literal        *      Read
    topic           __schema_   prefix         *      Read
    topic           *           literal        *      Write
    topic           *           literal        *      Create
  5. Above you can see your SCRAM username under Username and the secret holding your SCRAM password under Secret. In order to retrieve the password, execute the following command:

    $ oc get secret test-credentials-cli -o jsonpath='{.data.password}' | base64 --decode
    *******

NEXT: For more information about how to connect to your cluster, read the IBM Event Streams product documentation.

Get Event Streams TLS Certificates

In this section we are going to see how to download the TLS certificates needed to securely connect to our IBM Event Streams instance.

UI

  1. Log into the IBM Event Streams console user interface as explained before in this readme.

  2. Click on the Connect to this cluster option displayed on the dashboard. This will display a menu where you will see a Certificates section:

  3. Depending on what language your application is written in, you will need a PKCS12 certificate or a PEM certificate. Click on Download certificate for any of the options you need. If it is the PKCS12 certificate, bear in mind it comes with a password for the truststore. You don’t need to write this down, as it is displayed any time you click the Download certificate button.

CLI

  1. Log into IBM Event Streams through the CLI as already explained before in this readme.

  2. To retrieve the PKCS12 certificate execute the following command:

    $ cloudctl es certificates --format p12
    Trustore password is ********
    Certificate successfully written to /Users/testUser/Downloads/es-cert.p12.
    OK
  3. To retrieve the PEM certificate execute the following command:

    $ cloudctl es certificates --format pem
    Certificate successfully written to /Users/testUser/Downloads/es-cert.pem.
    OK
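
With the SCRAM credentials from the previous section and one of these certificates, you can assemble the client configuration your applications need. Below is a minimal sketch of a kafka.properties file for a Java-style Kafka client, assuming the PKCS12 truststore; every <value> is a placeholder for what you collected in the previous sections:

    # Sketch: minimal client configuration combining SCRAM credentials and the TLS truststore;
    # all <value> entries are placeholders
    cat > kafka.properties <<EOF
    bootstrap.servers=<external-bootstrap-address>
    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-512
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<scram-username>" password="<scram-password>";
    ssl.truststore.location=es-cert.p12
    ssl.truststore.type=PKCS12
    ssl.truststore.password=<truststore-password>
    EOF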

NEXT: For more information about how to connect to your cluster, read the IBM Event Streams product documentation.

Run the Starter Application

This section walks through generating a starter application for use with IBM Event Streams, as documented in the official product documentation.

  1. Log into the IBM Event Streams Dashboard.

  2. Click the Try the starter application button from the Getting Started page.

  3. Click Download JAR from GitHub. This will open a new window to https://github.com/ibm-messaging/kafka-java-vertx-starter/releases

    • Click the link for demo-all.jar from the latest release available. At the time of this writing, the latest version was 1.0.0.

  4. Return to the Configure & run starter application window and click Generate properties.

  5. In the dialog that pops up from the right-hand side of the screen, enter the following information:

    • Starter application name: monitoring-lab-[your-initials]
    • Leave New topic selected and enter a Topic name of monitoring-lab-topic-[your-initials].
    • Click Generate and download .zip
  6. In a Terminal window, unzip the generated ZIP file from the previous step and move the demo-all.jar file into the same folder (see the command sketch after this list).

  7. Review the extracted kafka.properties to understand how Event Streams has generated credentials and configuration information for this sample application to connect.

  8. Run the command java -Dproperties_path=./kafka.properties -jar demo-all.jar.

  9. Wait until you see the string Application started in X ms in the output and then visit the application’s user interface via http://localhost:8080.

  10. Once in the user interface, enter a message to be used as the Kafka record value, then click Start producing.

  11. Wait a few moments until the UI updates to show some of the confirmed produced messages and offsets, then click on Start consuming on the right side of the application.

  12. In the IBM Event Streams user interface, go to the topic you sent the messages to and make sure the messages have actually made it.

  13. You can perform the following actions on the application:

    • If you would like to stop the application from producing, you can click Stop producing.
    • If you would like to stop the application from consuming, you can click Stop consuming.
    • If you would like to stop the application entirely, press Control+C in the Terminal session where the application is running.

An alternative sample application can be leveraged from the official documentation to generate higher amounts of load.
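
For reference, here are steps 6 through 8 above consolidated as shell commands; the archive name is hypothetical, so use whatever name the UI generated for you:

    # Sketch of steps 6-8; the ZIP file name below is hypothetical
    unzip monitoring-lab-xx.zip -d starter-app    # extracts kafka.properties and related files
    mv demo-all.jar starter-app/                  # put the downloaded JAR alongside them
    cd starter-app
    java -Dproperties_path=./kafka.properties -jar demo-all.jar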

Using Kafdrop

Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. It is very helpful for development purposes.

Here is a script that can be used to start a local Kafdrop web server. It assumes a .env file defining KAFKA_USER, KAFKA_PASSWORD, KAFKA_CERT_PWD and KAFKA_BROKERS, plus a kafka.properties template under ./scripts:

source .env
sed 's/KAFKA_USER/'$KAFKA_USER'/g' ./scripts/kafka.properties > ./scripts/output.properties
sed -i '' 's/KAFKA_PASSWORD/'$KAFKA_PASSWORD'/g' ./scripts/output.properties
sed -i '' 's/KAFKA_CERT_PWD/'$KAFKA_CERT_PWD'/g' ./scripts/output.properties
# Start Kafdrop; the image name was missing here and is assumed to be the
# public obsidiandynamics/kafdrop image
docker run -d --rm -p 9000:9000 \
--name kafdrop \
-v $(pwd)/certs:/home/certs \
-e KAFKA_BROKERCONNECT=$KAFKA_BROKERS \
-e KAFKA_PROPERTIES="$(cat ./scripts/output.properties | base64)" \
obsidiandynamics/kafdrop

# When you are done, stop the container
docker stop kafdrop

The Web console is at http://localhost:9000/

Running Docker in a Kubernetes pod

If you need to run some of the lab within a shell session with Docker, you can create a pod using the OpenShift console under one of your projects:


In the YAML editor, copy this YAML file. This operation will download the Docker image from the ibmcase account on Docker Hub.

You can also download the YAML and run oc apply -f dind.yaml.

Then exec a shell within the pod: oc exec -it dind bash. You should be able to run another Docker image in this pod, for example our Python environment.
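
If you cannot reach the referenced file, the sketch below shows what such a docker-in-docker pod definition can look like; the docker:dind image and the privileged security context are assumptions (the lab's actual definition uses an ibmcase image), and your cluster must allow privileged pods:

    # Sketch of a docker-in-docker pod; the image and securityContext are assumptions,
    # not the lab's actual definition
    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: dind
    spec:
      containers:
      - name: dind
        image: docker:dind        # hypothetical stand-in for the lab's ibmcase image
        securityContext:
          privileged: true        # docker-in-docker requires a privileged container
    EOF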