In my earlier story on the Fundamentals of Apache Kafka, we covered what Kafka is, why we should use it, and its advantages.
In this part, I’ll show you how to use Apache Kafka even if you’re a beginner and have no prior knowledge of it.
Let’s take a use case where Apache Kafka fits, and see how we can use it.
Problem: Suppose we have a Client Application that collects some analytics from your app and sends it to the server at a regular interval.
Since this data is important, we must not lose any of it while sending it to the analytics server, and the server must be able to consume all the requests coming from different clients.
Answer: One thing we can see is that we need a reliable and scalable system: we cannot afford to lose any data, and our system must be able to handle hundreds of requests.
Let’s implement it without Apache Kafka:
Sample code to send data to the server using HttpURLConnection in Java:
public void sendLog(String url, Request request) throws IOException {
    URL serverUrl = new URL(url);
    HttpURLConnection conn = (HttpURLConnection) serverUrl.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");

    String jsonPayload = request.toString();
    System.out.println("Sending payload: " + jsonPayload);

    DataOutputStream outputStream = new DataOutputStream(conn.getOutputStream());
    outputStream.writeBytes(jsonPayload);
    outputStream.flush();
    outputStream.close();
}
Everything looks good so far, but when we talk about reliability we cannot trust a plain HTTP connection: it is stateless, and when hundreds of clients send thousands of requests, our server will be unable to handle them unless we scale it out.
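To make the reliability gap concrete, a common stopgap is client-side retry with exponential backoff around the sendLog() call. The sketch below is illustrative only (the RetrySender class and backoffMillis helper are my own names, not part of the original code): even with retries, the payload is simply lost if all attempts fail or the client process dies, because there is no durable buffer between client and server.

```java
import java.util.concurrent.TimeUnit;

public class RetrySender {
    // Exponential backoff: 100 ms, 200 ms, 400 ms, ... capped at 5 seconds.
    static long backoffMillis(int attempt) {
        long delay = 100L * (1L << attempt);
        return Math.min(delay, 5000L);
    }

    // Hypothetical wrapper around a send operation: retry a few times,
    // sleeping between attempts, then give up -- at which point the data is lost.
    static boolean sendWithRetry(Runnable send, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                send.run();
                return true;
            } catch (RuntimeException e) {
                try {
                    TimeUnit.MILLISECONDS.sleep(backoffMillis(attempt));
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false; // no durable buffer: the payload is gone
    }

    public static void main(String[] args) {
        System.out.println(backoffMillis(0));  // 100
        System.out.println(backoffMillis(3));  // 800
        System.out.println(backoffMillis(10)); // capped at 5000
    }
}
```

This is exactly the kind of fragile plumbing a durable message broker removes from the client.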
And what if we have multiple sources sending different types of logs? How do we handle these different log streams on the server side?
Let’s fix this problem with Apache Kafka:
Data sources will publish their data to Apache Kafka, and Kafka will then distribute the data stream to the desired destination. But how?
This is where topics help us.
Kafka topics organize related events. For example, we may have a topic called logs, which contains logs from an application. Topics are roughly analogous to SQL tables. However, unlike SQL tables, Kafka topics are not queryable. Instead, we must create Kafka producers and consumers to make use of the data. The data in a topic is stored as key-value pairs in binary format.
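To make "binary format" concrete: Kafka's built-in StringSerializer encodes a string as UTF-8 bytes, and StringDeserializer reverses it; the broker itself only ever sees opaque byte arrays. The sketch below mimics that round trip with plain JDK calls (the class name SerializationDemo is mine, chosen for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SerializationDemo {
    // What StringSerializer does under the hood: String -> UTF-8 bytes.
    static byte[] serialize(String s) {
        return s == null ? null : s.getBytes(StandardCharsets.UTF_8);
    }

    // What StringDeserializer does: UTF-8 bytes -> String.
    static String deserialize(byte[] b) {
        return b == null ? null : new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] key = serialize("user-42");
        byte[] value = serialize("{\"event\":\"click\"}");
        // The broker stores and forwards these bytes without interpreting them.
        System.out.println(Arrays.toString(key));
        System.out.println(deserialize(value));
    }
}
```

This is why the producer configuration later in this post must name serializer classes for both the key and the value.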
In our case, I have created a topic called analytics in our code, but we can have multiple topics, such as a purchase topic for storing purchase information, an email topic, and so on.
As you can see in the diagram, the client creates a producer that sends data to Apache Kafka; Kafka routes this data based on the topic and delivers it to the server, where a consumer consumes data based on the topic.
Let’s set up a Kafka producer in Eclipse:
We only need to add a few dependencies to pom.xml:
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.6</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.1.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <!-- version assumed to match slf4j-api; it was missing in the original listing -->
    <version>1.7.32</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.32</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
After adding these dependencies, we are good to go.
Kafka Configuration:
public interface KafkaConfiguration {
    String TOPIC_NAME = "analytics";
    String SERVER_URL = "localhost:9092";
    String KEY = "org.apache.kafka.common.serialization.StringSerializer";
    String VALUE = "org.apache.kafka.common.serialization.StringSerializer";
}
Create Kafka Properties:
Properties props = new Properties();
props.put("bootstrap.servers", SERVER_URL);
props.put("key.serializer", KEY);
props.put("value.serializer", VALUE);
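As a side note, instead of hard-coding the property names as string literals, kafka-clients ships the standard keys as constants on ProducerConfig; a small variant of the configuration above could look like this (assuming the same KafkaConfiguration interface from earlier):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
// ProducerConfig.BOOTSTRAP_SERVERS_CONFIG is the constant for "bootstrap.servers"
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConfiguration.SERVER_URL);
// StringSerializer.class.getName() avoids typos in the fully qualified class name
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
```

Using the constants means a typo becomes a compile error rather than a silent misconfiguration at runtime.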
Create Producer and Send Data:
private void sendMessage(Request request) {
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    ProducerRecord<String, String> pr =
            new ProducerRecord<>(KafkaConfiguration.TOPIC_NAME, "key", request.toString());
    producer.send(pr, (metadata, exception) -> {
        if (exception != null) {
            System.err.println("Error sending message: " + exception.getMessage());
        } else {
            System.out.println("Message sent successfully: topic=" + metadata.topic() + ", partition="
                    + metadata.partition() + ", offset=" + metadata.offset());
        }
    });
    producer.close(); // close() flushes any buffered records before shutting down
}
This is how we create a producer and send data to Kafka. In the next post, I’ll show you how to set up Kafka, Node.js, and MongoDB, create a topic, receive data from the consumer, and save that data to the database.
Please follow and like for more awesome tutorials.