A beginner's guide to Spring Boot + Apache Kafka
Apache Kafka – We are living in the age of the data revolution. Have you ever wondered how huge amounts of real-time data are processed? How does an eventing system work?
Okay, cool… the answer is Apache Kafka, and you have come to the right place. In this Kafka tutorial, we will see what Kafka is and how to develop a Spring Boot application with Apache Kafka.
What is Kafka?
Kafka can be defined as a distributed publish-subscribe messaging system that guarantees speed, scalability, and durability. Thanks to its unique design, Kafka can handle very high levels of throughput. At its core, it is simply a messaging system.
Before developing our Spring Boot application, we will go through the basic terminology of Apache Kafka.
Producer: Simply the application that sends the message. The message could be anything; in Kafka terms, it is just an array of bytes.
Consumer: An application that receives the messages sent by the producer. The consumer doesn’t consume messages directly from the producer. Instead, the producer sends messages to a Kafka server, and the consumer consumes them from there.
Kafka Broker: Just another name for a Kafka server. The producer and consumer use the Kafka broker as an intermediary to send and receive messages.
Cluster: A group of computers, each running one instance of a Kafka broker.
Topic: Since the producer sends many messages to a Kafka server, it would be very difficult for each consumer to know which messages it should consume. A topic is a name for a data stream: each consumer listens to a particular topic, and whenever there is data in that topic, the consumer receives it.
Partition: The data we are dealing with can be very large. In that case, the data in each topic may be broken into partitions, which are distributed to different brokers across the cluster.
Offset: A sequence number assigned to each message arriving at the Kafka server. Offsets are not assigned globally; each partition assigns them locally.
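To make that concrete, here is a toy plain-Java sketch (not Kafka code; the class and method names are invented for illustration) showing how each partition keeps its own offset counter instead of sharing a global one:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a toy "topic" that assigns offsets per partition,
// the way Kafka does. Names here are invented for this example.
public class OffsetDemo {
    static class Topic {
        // Next offset to hand out, tracked separately for each partition.
        private final Map<Integer, Long> nextOffset = new HashMap<>();

        // Append a message to a partition and return the offset it was assigned.
        long append(int partition, String message) {
            long offset = nextOffset.getOrDefault(partition, 0L);
            nextOffset.put(partition, offset + 1);
            return offset;
        }
    }

    public static void main(String[] args) {
        Topic topic = new Topic();
        // Offsets restart from 0 in each partition — they are local, not global.
        System.out.println(topic.append(0, "a")); // prints 0
        System.out.println(topic.append(0, "b")); // prints 1
        System.out.println(topic.append(1, "c")); // prints 0
    }
}
```

So a message is uniquely identified by the combination of topic, partition, and offset, not by the offset alone.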
Consumer groups: A group of consumers acting as a single unit. Partitioning and consumer groups together are the tools we use for scaling the application.
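The scaling idea is that each partition is consumed by exactly one consumer within a group, so adding consumers spreads the partitions out. A hedged plain-Java sketch of that idea (a simple round-robin split, not Kafka's actual partition assignor; all names are invented):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: divide a topic's partitions among the consumers
// of one group, round-robin. Each partition goes to exactly one consumer,
// which is how partitioning plus consumer groups scale consumption.
public class GroupAssignmentDemo {
    static Map<String, List<Integer>> assign(List<String> consumers, int partitions) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String c : consumers) {
            assignment.put(c, new ArrayList<>());
        }
        for (int p = 0; p < partitions; p++) {
            // Deal partitions out like cards, one per consumer in turn.
            String consumer = consumers.get(p % consumers.size());
            assignment.get(consumer).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Three consumers in one group sharing a six-partition topic:
        // each consumer ends up with two partitions.
        System.out.println(assign(List.of("c1", "c2", "c3"), 6));
    }
}
```

With one consumer in the group, that consumer reads every partition; with as many consumers as partitions, the load is fully spread out.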
Okay, enough theory. Now we can jump into our Spring Boot + Kafka application. Come, let’s get our hands dirty.
First, we need to create a producer application. Go to start.spring.io and create a new Spring Boot project with the Spring for Apache Kafka dependency.
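Assuming a Maven project generated by start.spring.io, that dependency shows up in the pom.xml like this (Spring Boot manages the version for you):

```xml
<!-- Spring for Apache Kafka, as added by start.spring.io -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```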
Create a controller package and write an API for publishing messages. Next, create a service package for sending messages to a topic. Alright!! We are almost ready to send messages, but first we have to write some configuration for the serialization of messages, so create a config package to hold it. The bootstrap server is our local machine, hosting Kafka on port 9092, and we define serializers for both the key and the value (the message) of each record.
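Since the original code appeared in images, here is a hedged sketch of what those three classes typically look like with Spring for Apache Kafka and a broker on localhost:9092. The class names, URL paths, and the topic name `myTopic` are invented for illustration, not taken from the original article:

```java
// --- File: KafkaProducerConfig.java (config package) ---
// Serialization and connection settings for the producer.
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        // The broker runs on the local machine, port 9092.
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Serialize both key and value as plain strings.
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}

// --- File: MessageService.java (service package) ---
// Sends messages to the topic via the KafkaTemplate bean above.
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class MessageService {
    private static final String TOPIC = "myTopic"; // illustrative topic name
    private final KafkaTemplate<String, String> kafkaTemplate;

    public MessageService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String message) {
        kafkaTemplate.send(TOPIC, message);
    }
}

// --- File: MessageController.java (controller package) ---
// Exposes an HTTP endpoint for publishing messages.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/kafka")
public class MessageController {
    private final MessageService service;

    public MessageController(MessageService service) {
        this.service = service;
    }

    @GetMapping("/publish/{message}")
    public String publish(@PathVariable String message) {
        service.sendMessage(message);
        return "Published: " + message;
    }
}
```

With Kafka running locally, a request like GET http://localhost:8080/kafka/publish/hello would send "hello" to the topic.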
This article originally appeared on medium.com, where the full article and its images are available.