Getting started with Confluent Kafka on OpenShift
In this scenario, we’re going to do a development deployment of Confluent Platform using the Confluent for Kubernetes Operator. We use TLS encryption between components, configure different listeners for authentication, and expose the Kafka bootstrap server with OpenShift routes.
To validate the deployment we will use two simple applications to produce and consume order messages.
Confluent has developed a new operator. The current product documentation can be found in Confluent for Kubernetes. Some examples of cluster configuration are in the confluent-kubernetes-examples GitHub repository.
- Helm 3 deployed on OCP
- Cluster administrator access to the OCP cluster
- Clock synchronization set up using Network Time Protocol between worker nodes (check with `ps -ef | grep ntpd`)
You can clone the eda-lab-inventory repository:

```shell
git clone https://github.com/ibm-cloud-architecture/eda-lab-inventory
```
For production deployments you can use the eventsizer website to assess cluster sizing for VM, bare metal or Kubernetes deployments. The Confluent System Requirements also give guidelines for the different Confluent components in terms of physical resources and supported OS.
The important considerations for the brokers are disk, RAM and CPU. Dedicating one Kubernetes worker node per Kafka broker is a safe decision.
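To make the disk consideration concrete, here is a back-of-the-envelope sizing sketch; every number in it is an illustrative assumption, not Confluent guidance:

```shell
# Rough broker disk sizing; all numbers below are assumptions for illustration.
WRITE_MB_S=20      # average produce throughput into the cluster, MB/s
RETENTION_H=72     # topic retention, hours
RF=3               # replication factor: each byte is stored on RF brokers
# The *13/10 adds ~30% headroom for indexes, open segments and compaction.
TOTAL_GB=$(( WRITE_MB_S * 3600 * RETENTION_H * RF * 13 / 10 / 1024 ))
echo "total disk: ${TOTAL_GB} GB; per broker (3 brokers): $(( TOTAL_GB / 3 )) GB"
```

Plug in your own throughput and retention figures; the point is that retention and replication factor multiply disk needs quickly.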
Log in to the OpenShift cluster via the CLI first:
```shell
oc login ...
```
Create a new project/namespace to encapsulate the Confluent for Kubernetes resources.

```shell
oc new-project confluent
```
If you change this project name, you will need to update the YAML file that defines the platform (see this example file).
Add the necessary Confluent for Kubernetes Helm repository artifacts and update.

```shell
helm repo add confluentinc https://packages.confluent.io/helm
helm repo update
```
Install the Operator.

```shell
helm upgrade --install confluent-operator confluentinc/confluent-for-kubernetes
```
You can wait for the deployment to become available with the following command.

```shell
oc wait deployment confluent-operator --for=condition=Available
```
Authorize service account to get privileged security context
Note - You may see an error about an insufficient security context, stating that your deployment cannot create a pod. This is because the associated service account does not have sufficient permissions to create the pod. Run the following command to list the Service Accounts in the current namespace.

```shell
oc get sa
```

```
NAME                       SECRETS   AGE
builder                    2         4m11s
confluent-for-kubernetes   2         3m4s
default                    2         4m11s
deployer                   2         4m11s
pipeline                   2         4m11s
```
As you can see, the `confluent-for-kubernetes` service account was automatically created. Run the following command to give that service account the required security context constraint.

```shell
oc adm policy add-scc-to-user privileged -z confluent-for-kubernetes
```
Note - Instead of adding the `privileged` security context constraint to the `confluent-for-kubernetes` service account, you may prefer to change the UID (1001) in the Confluent for Kubernetes operator `values.yaml` file and apply the customized Helm chart.
You might need to start a new rollout manually.

```shell
oc rollout restart deployment/confluent-operator
```
We’re going to expose the Kafka service using OpenShift routes, which requires TLS configuration for Confluent Platform.
First of all, we will need to update the `default` service account so that the resources can be brought up.

```shell
oc adm policy add-scc-to-user privileged -z default
```
Note - Similar to deploying the Confluent for Kubernetes operator, instead of adding the `privileged` security context constraint to the `default` service account, you may prefer to change the UID (1001) required in the Helm chart YAML files and apply those.
- In the root of the lab repository is a folder named `environment/confluent/certs/`. We have defined some sample configurations to use to generate our Certificate Authority (CA) keys.
IMPORTANT - If you are using another namespace name, the `certs/server-domain.json` file may need to be modified.
You can replace the `confluent` value with the project/namespace in which your Confluent for Kubernetes resources are located.
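As an illustration only, a cfssl `server-domain.json` might look like the sketch below; every hostname in it is an assumption and must be adapted to your cluster's internal service names and ingress subdomain:

```json
{
  "CN": "kafka.confluent.svc.cluster.local",
  "hosts": [
    "*.confluent.svc.cluster.local",
    "*.<your-ingress-subdomain>",
    "localhost"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
```

The `hosts` array is what becomes the SAN list of the server certificate, so it must cover both the internal Kubernetes service DNS names and the external route hostnames.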
We will need the `cfssl` CLI tool to sign, verify and bundle TLS certificates. On macOS you can use brew to install it.

```shell
brew install cfssl
```
We’ll create a new folder, `certs/generated/`, to keep the generated CA files.

```shell
# under environments/confluent
mkdir ./certs/generated
cfssl gencert -initca ./certs/ca-csr.json | cfssljson -bare ./certs/generated/ca -
```
This should create the following files:

```
├── generated
│   ├── ca-key.pem
│   ├── ca.csr
│   └── ca.pem
```
We can validate the CA file with the following.

```shell
openssl x509 -in ./certs/generated/ca.pem -text -noout
```
Now, to create server certificates with the appropriate SANs (listed in `server-domain.json`), we do:

```shell
cfssl gencert -ca=./certs/generated/ca.pem \
  -ca-key=./certs/generated/ca-key.pem \
  -config=./certs/ca-config.json \
  -profile=server ./certs/server-domain.json | cfssljson -bare ./certs/generated/server
```
This should add:

```
├── generated
│   ├── server-key.pem
│   ├── server.csr
│   └── server.pem
```
Again, we can validate the server certificate and its SANs.

```shell
openssl x509 -in ./certs/generated/server.pem -text -noout
```
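Beyond reading the text output, you can also have `openssl` confirm that the server certificate actually chains back to our CA (paths assume the `certs/generated` layout created above):

```shell
# Prints "./certs/generated/server.pem: OK" when the chain is valid.
openssl verify -CAfile ./certs/generated/ca.pem ./certs/generated/server.pem
```

If this reports a verification error, regenerate the server certificate before creating the secrets below.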
We’re going to create eight OpenShift secrets, which will all use the same CA files for the sake of simplicity. In production we may use a different certificate for each component. They are all named differently because the TLS configurations within the various Custom Resources expect different TLS secrets.

```shell
oc create secret generic generic-tls \
  --from-file=fullchain.pem=./certs/generated/server.pem \
  --from-file=cacerts.pem=./certs/generated/ca.pem \
  --from-file=privkey.pem=./certs/generated/server-key.pem &&
oc create secret generic kafka-tls-internal \
  --from-file=fullchain.pem=./certs/generated/server.pem \
  --from-file=cacerts.pem=./certs/generated/ca.pem \
  --from-file=privkey.pem=./certs/generated/server-key.pem &&
oc create secret generic kafka-tls-external \
```
We need to create another secret for the internal Kafka listener so that the other Confluent Platform resources can connect to the Kafka cluster over PLAINTEXT.

```shell
oc create secret generic internal-plain-credential \
  --from-file=plain-users.json=./certs/creds-kafka-sasl-users.json \
  --from-file=plain.txt=./certs/creds-client-kafka-sasl-user.txt
```
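For reference, the `plain-users.json` file maps SASL/PLAIN user names to passwords. A minimal sketch (the user name and password below are made up for illustration) could be:

```json
{
  "kafka": "kafka-secret"
}
```

The companion `plain.txt` holds the client-side credentials for one of those users.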
Get the OpenShift ingress subdomain name:
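One way to retrieve it on OpenShift 4.x (an assumption; it requires read access to the cluster configuration) is to query the cluster ingress resource:

```shell
# Prints the ingress subdomain, e.g. apps.mycluster.example.com
oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'
```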
and update the `environments/confluent/platform.yaml` file to reflect this domain in the route elements:

```yaml
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
spec:
  listeners:
    external:
      externalAccess:
        type: route
        route:
          domain: <tochange>
...
```
Apply the minimum custom resources to get 3 ZooKeeper nodes, 3 Kafka brokers, 1 Kafka Connect worker, ksqlDB, Control Center and Schema Registry:

```shell
oc apply -f environments/confluent/platform.yaml
```
Note - If you created a namespace with a name other than `confluent`, you will need to create a local YAML file: either remove `metadata.namespace: confluent` from each of the Custom Resource YAMLs and apply that file in your created namespace, or edit the `metadata.namespace:` value to your created one. You can also customize the settings in these YAMLs as you see fit. In the eda-lab-inventory repository you will find such a file.
Now wait a few minutes for all the resources to come up.

```shell
oc get pods -w
```

```
NAME                                  READY   STATUS    RESTARTS   AGE
confluent-operator-5b4fb58d99-vbnsn   1/1     Running   0          29m
connect-0                             1/1     Running   2          5m44s
controlcenter-0                       0/1     Pending   0          108s
kafka-0                               1/1     Running   0          4m9s
kafka-1                               1/1     Running   0          4m9s
kafka-2                               1/1     Running   0          4m9s
```
Once everything is ready, you can test by port-forwarding to the `controlcenter-0` pod.

```shell
oc port-forward controlcenter-0 9021:9021
```
```
Forwarding from 127.0.0.1:9021 -> 9021
Forwarding from [::1]:9021 -> 9021
Handling connection for 9021
```
In your browser go to `http://localhost:9021`.
Create a `KafkaTopic` custom resource named `orders`:

```shell
cat << EOF | oc apply -f -
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: orders
  namespace: confluent
spec:
  replicas: 3
  partitionCount: 1
EOF
```
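To confirm the operator reconciled the topic, you can query the custom resource (a sketch assuming the Confluent for Kubernetes CRDs are installed and the `confluent` namespace is used):

```shell
oc get kafkatopic orders -n confluent
```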
Once everything is up and running, you can verify through Control Center using a port forward like before.

```shell
oc port-forward controlcenter-0 9021:9021
```
- Go to `https://localhost:9021`. The `https` is important this time, unlike previously, as the endpoint is now secured.
You can also view the Control Center UI from the enabled route; `oc get routes` will show its hostname.