# Next steps
You can run the demonstrations on your own OpenShift cluster using the existing assets; more assets are available in the public Git repositories.
## EDA Community Call
IBM internal: register in Your Learning for a community call every Wednesday at 7:00 AM PST (Webex boyerje). The calls rotate through different scopes:

- Week 1 of every month: Beginner sessions on Kafka
- Week 2 of every month: Bring your own opportunity so we can share tricks on how to make it progress
- Week 3 of every month: Deeper dives: asset presentations, architecture, and coding discussions
- Week 4 of every month: Project success stories, opportunity success stories, and product roadmap updates
## Internal site
The IBM internal site shares knowledge on use cases, workshops, and more.
## Kafka Connector World
The Event Streams demonstration introduced the Kafka Connect framework. The real-time inventory solution uses the MQ source connector, with the Kafka Connect cluster defined in the kafka-connect.yaml file as:
```yaml
apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaConnect
metadata:
  name: std-1-connect-cluster
  annotations:
    # Manage connectors through KafkaConnector custom resources
    eventstreams.ibm.com/use-connector-resources: "true"
spec:
  version: 3.0.0
  replicas: 2
  # Bootstrap address of the target Event Streams cluster
  bootstrapServers: es-demo-kafka-bootstrap.cp4i-eventstreams.svc:9093
  # Custom image with the connector jars added
  image: quay.io/ibmcase/eda-kconnect-cluster-image:latest
  resources:
    limits:
      cpu: 2000m
      memory: 2Gi
    requests:
      cpu: 1000m
      memory: 2Gi
  template:
    pod:
      imagePullSecrets: []
      metadata:
        annotations:
          productChargedContainers: std-1-connect-cluster-connect
          eventstreams.production.type: CloudPakForIntegrationNonProduction
          productID: 2a79e49111f44ec3acd89608e56138f5
          productName: IBM Event Streams for Non Production
          productVersion: 11.0.0
          productMetric: VIRTUAL_PROCESSOR_CORE
          cloudpakId: c8b82d189e7545f0892db9ef2731b90d
          cloudpakName: IBM Cloud Pak for Integration
          cloudpakVersion: 2022.1.1
          productCloudpakRatio: "2:1"
  config:
    group.id: std-1-connect-cluster
    # Internal topics where the Connect workers keep their state
    offset.storage.topic: std-1-connect-cluster-offsets
    config.storage.topic: std-1-connect-cluster-configs
    status.storage.topic: std-1-connect-cluster-status
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # Mutual TLS against the Event Streams cluster
  tls:
    trustedCertificates:
      - secretName: es-demo-cluster-ca-cert
        certificate: ca.crt
  authentication:
    type: tls
    certificateAndKey:
      certificate: user.crt
      key: user.key
      secretName: std-1-tls-user
```
The source connector itself is defined in the kafka-mq-src-connector.yaml file.
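That file is not reproduced here, but a minimal KafkaConnector definition for the IBM MQ source connector looks roughly like the sketch below; the queue manager, channel, queue, and topic values are placeholder assumptions, not the ones used by the solution.

```yaml
apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaConnector
metadata:
  name: mq-source
  labels:
    # Must match the name of the KafkaConnect cluster above
    eventstreams.ibm.com/cluster: std-1-connect-cluster
spec:
  class: com.ibm.eventstreams.connect.mqsource.MQSourceConnector
  tasksMax: 1
  config:
    # Placeholder connection settings -- replace with your queue manager's values
    mq.queue.manager: QM1
    mq.connection.name.list: mq-server(1414)
    mq.channel.name: DEV.APP.SVRCONN
    mq.queue: ITEMS
    topic: items
    mq.record.builder: com.ibm.eventstreams.connect.mqsource.builders.DefaultRecordBuilder
    key.converter: org.apache.kafka.connect.storage.StringConverter
    value.converter: org.apache.kafka.connect.storage.StringConverter
```

Because the KafkaConnect cluster above sets `eventstreams.ibm.com/use-connector-resources: "true"`, applying a resource like this is all that is needed to start the connector.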
To go deeper, the following labs and best practices are available:
- a technical summary
- MQ connector lab
- Deploy a cloud object storage sink connector lab
- Deploy an S3 sink connector using Apache Camel
- MirrorMaker 2.0 as a Kafka Connect framework solution
- Source code of the MQ source connector
- Source code of the RabbitMQ connector and the matching lab
- Source code of the JDBC sink connector
## Reactive Messaging Programming
Event-driven microservices adopt the reactive manifesto, which means using messaging to communicate between components. When the components are distributed, Kafka or MQ is used as the broker.
MicroProfile Reactive Messaging is a very elegant and easy way to integrate with Kafka / Event Streams. The best support for it is in Quarkus, and this reactive with kafka guide is a good first read. MicroProfile Reactive Messaging 1.0 is supported in Open Liberty with MicroProfile 3.0.
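To illustrate the programming model, the sketch below binds hypothetical order channels to Kafka topics purely through configuration; it assumes a Quarkus project with the quarkus-config-yaml extension (the same keys work in application.properties), and all channel and topic names are invented for the example.

```yaml
# application.yaml -- hypothetical channel bindings for a Quarkus service
kafka:
  bootstrap:
    servers: es-demo-kafka-bootstrap.cp4i-eventstreams.svc:9093

mp:
  messaging:
    incoming:
      orders:                    # consumed by a method annotated @Incoming("orders")
        connector: smallrye-kafka
        topic: orders
        value:
          deserializer: org.apache.kafka.common.serialization.StringDeserializer
    outgoing:
      order-events:              # produced by a method annotated @Outgoing("order-events")
        connector: smallrye-kafka
        topic: order-events
        value:
          serializer: org.apache.kafka.common.serialization.StringSerializer
```

The application code then only deals with channel names; the connector, topics, and serializers stay in configuration.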
The code templates in the EDA quickstart repository include reactive messaging examples.
## The SAGA implementation
Long-running processes between microservices are addressed by adopting the SAGA pattern. You can read about the pattern in this note.
Then visit the choreography implementation done with Event Streams and reactive programming here:
- Order microservice, which keeps the SAGA coherent - git repository
- Reefer microservice, a SAGA participant - git repo
- Voyage microservice, a SAGA participant - git repo
The orchestration implementation with Event Streams and reactive programming is here.
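To make the choreography concrete, the sketch below outlines one plausible topic and event layout for such a SAGA; every name in it is invented for illustration and is not taken from the repositories above.

```yaml
# Hypothetical event layout for a choreographed order SAGA
saga:
  trigger:
    topic: orders
    event: OrderCreated            # emitted by the order service; starts the SAGA
  participants:
    - topic: voyages
      success: VoyageAssigned      # voyage service books capacity
      compensation: VoyageVoided   # emitted when the SAGA rolls back
    - topic: reefers
      success: ReeferAllocated     # reefer service allocates a container
      compensation: ReeferReleased
  completion:
    success: OrderConfirmed        # order service has seen all success events
    failure: OrderRejected         # order service has triggered the compensations
```

In the choreography variant each service reacts to the others' events; in the orchestration variant the order service drives the participants explicitly.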
## Change data capture with Debezium and the Outbox pattern
A very interesting lab walks through Debezium with the outbox pattern.
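As a sketch of what such a setup involves, the KafkaConnector below configures the Debezium PostgreSQL connector to capture a hypothetical outbox table and route rows to topics by aggregate type; every database setting is a placeholder assumption, and credentials would normally come from an externalized secret rather than plain configuration.

```yaml
apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaConnector
metadata:
  name: outbox-connector
  labels:
    eventstreams.ibm.com/cluster: std-1-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    # Placeholder database settings -- replace with your own
    database.hostname: orderdb
    database.port: 5432
    database.user: postgres
    database.password: postgres       # use an externalized secret in practice
    database.dbname: orders
    database.server.name: orderdb
    plugin.name: pgoutput
    # Capture only the outbox table; the EventRouter SMT routes each row
    # to a topic derived from its aggregatetype column
    table.include.list: public.outboxevent
    transforms: outbox
    transforms.outbox.type: io.debezium.transforms.outbox.EventRouter
```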
## Full GitOps story
To get a better understanding of the EDA GitOps process, see this technical note and reuse the accompanying Git repositories.
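As a sketch of what the resulting flow looks like, assuming OpenShift GitOps (Argo CD) and a hypothetical repository layout, a single Application resource keeps an environment folder of a config repository continuously synchronized with the cluster:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eda-rt-inventory            # hypothetical application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    # Hypothetical config repository holding the Kubernetes manifests
    repoURL: https://github.com/my-org/eda-rt-inventory-gitops
    targetRevision: main
    path: environments/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: rt-inventory-dev
  syncPolicy:
    automated:
      prune: true                   # delete resources removed from Git
      selfHeal: true                # revert manual drift on the cluster
```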
## Instana monitoring
This section covers deploying Instana APM on an Event Streams cluster running on OpenShift/Kubernetes.
- Create the instana-agent project and set the policy permissions to ensure the instana-agent service account is in the privileged security context:

```sh
oc login -u system:admin
oc new-project instana-agent
oc adm policy add-scc-to-user privileged -z instana-agent
```
- Log in to the Instana console: https://training-kafka.instana.io
- Click Deploy agent in the top right corner.
- Select OpenShift and enter the desired ClusterName/ClusterID to be monitored with Instana. You can find the cluster ID in the OpenShift console.
- Download the YAML file generated by Instana.
- Navigate to OpenShift -> Workloads -> DaemonSets, import the instana-agent.yaml, and create the resources for each of the document streams inside the YAML separately.
- For example, since we already created the namespace and service account in step 1, we can skip the first two documents and start from the Secret (see the sketch of the file layout after this list).
- Once the DaemonSet has been created, validate that the pods are running successfully, then wait about 5 minutes for Instana to automatically discover the Kafka components and instrument them.
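For orientation, the generated instana-agent.yaml is a multi-document manifest shaped roughly like the sketch below; the structure reflects the standard agent layout, but the actual keys, agent key, and endpoint values come from your Instana tenant, and the image tag here is only a placeholder.

```yaml
# Sketch of the instana-agent.yaml document streams (values are placeholders)
apiVersion: v1
kind: Namespace
metadata:
  name: instana-agent
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: instana-agent
  namespace: instana-agent
---
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent
  namespace: instana-agent
type: Opaque
data:
  key: <base64-encoded-agent-key>
---
apiVersion: apps/v1
kind: DaemonSet                       # one agent pod per worker node
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  selector:
    matchLabels:
      app: instana-agent
  template:
    metadata:
      labels:
        app: instana-agent
    spec:
      serviceAccountName: instana-agent
      containers:
        - name: instana-agent
          image: instana/agent:latest   # placeholder image reference
```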