Showing posts with the label Apache Kafka

Useful Kafka commands on macOS or Linux

Start up ZooKeeper:

./zookeeper-server-start.sh ../config/zookeeper.properties

Add the properties below to server.properties:

listeners=PLAINTEXT://localhost:9092
auto.create.topics.enable=false

Start up the Kafka broker:

./kafka-server-start.sh ../config/server.properties

How to create a topic?

./kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092 --replication-factor 1 --partitions 4

Output: Created topic test-topic.

Describe-topic command:

./kafka-topics.sh --describe --topic test-topic --bootstrap-server localhost:9092

Output:

Topic: test-topic  TopicId: O-uBj0D_R6aMhKMsTUgqhg  PartitionCount: 4  ReplicationFactor: 1  Configs: segment.bytes=1073741824
    Topic: test-topic  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: test-topic  Partition: 1  Leader: 0  Replicas: 0  Isr: 0
    Topic: test-topic  Partition: 2  Leader: 0  Replicas: 0  Isr: 0
    Topic: test-topic  Partition: 3  Leader: 0  Replicas: 0  Isr: 0

How to instantiate a console producer? Without Key

Build an Apache Kafka Producer application using callbacks

Use case: You have an application using an Apache KafkaProducer, but you want an automatic way of handling the responses after producing records. In this tutorial you learn how to use the Callback interface to automatically handle responses from producing records.

Short answer: overload the KafkaProducer.send method with an instance of the Callback interface as the second parameter.

producer.send(producerRecord, (recordMetadata, exception) -> {
    if (exception == null) {
        System.out.println("Record written to offset " + recordMetadata.offset()
                + " timestamp " + recordMetadata.timestamp());
    } else {
        System.err.println("An error occurred");
        exception.printStackTrace(System.err);
    }
});

Steps:

1. Initialize the project. To get started, make a new directory anywhere you'd like for this project:

mkdir kafka-producer-application-callback && cd kafka-producer-applica
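A fuller sketch of the callback pattern follows. This assumes the kafka-clients library on the classpath and a broker reachable at localhost:9092; the topic name test-topic and the record value are illustrative, not part of the original tutorial:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerWithCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("test-topic", "hello");
            // The second argument is the Callback; it is invoked on the
            // producer's I/O thread once the broker acknowledges (or
            // rejects) the record, so send() itself never blocks here.
            producer.send(record, (recordMetadata, exception) -> {
                if (exception == null) {
                    System.out.println("Record written to offset " + recordMetadata.offset()
                            + " timestamp " + recordMetadata.timestamp());
                } else {
                    System.err.println("An error occurred");
                    exception.printStackTrace(System.err);
                }
            });
        } // try-with-resources closes the producer, flushing pending records
    }
}
```

Because the callback runs asynchronously, keep it short and non-blocking; heavy error handling is better handed off to another thread.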

Build Apache Kafka Consumer application

Use case: You'd like to integrate an Apache KafkaConsumer in your event-driven application, but you're not sure where to start. In this tutorial you'll build a small application reading records from Kafka with a KafkaConsumer. You can use the code in this tutorial as an example of how to use an Apache Kafka consumer.

Steps:

1. Initialize the project. To get started, make a new directory anywhere you'd like for this project:

mkdir kafka-consumer-application && cd kafka-consumer-application

2. Get Confluent Platform. Next, create the following docker-compose.yml file to obtain Confluent Platform:

---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:6.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeepe
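The consumer the preview describes can be sketched roughly as below. This is an assumption-laden minimal example, not the tutorial's actual code: it presumes the kafka-clients library, a broker on localhost:9092, and illustrative topic (test-topic) and group (my-consumer-group) names:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-consumer-group");
        // Start from the beginning of the topic if this group has no committed offsets
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            while (true) {
                // poll() fetches the next batch of records, waiting up to 500 ms
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("key=" + record.key()
                            + " value=" + record.value()
                            + " offset=" + record.offset());
                }
            }
        }
    }
}
```

The try-with-resources block ensures the consumer leaves its group cleanly if the loop is ever broken out of.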

Build Apache Kafka Producer application

Use case: You'd like to integrate an Apache KafkaProducer in your event-driven application, but you're not sure where to start. In this tutorial you'll build a small application writing records to Kafka with a KafkaProducer. You can use the code in this tutorial as an example of how to use an Apache Kafka producer.

Steps:

1. Initialize the project. To get started, make a new directory anywhere you'd like for this project:

mkdir kafka-producer-application
cd kafka-producer-application

2. Get Confluent Platform. Next, create the following docker-compose.yml file to obtain Confluent Platform:

---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:6.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
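A minimal producer in the spirit of this post might look like the following. Again a hedged sketch, not the tutorial's code: it assumes the kafka-clients library, a broker on localhost:9092, and an illustrative topic name test-topic:

```java
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("test-topic", "key-1", "hello kafka");
            // send() is asynchronous and returns a Future;
            // get() blocks until the broker acknowledges the write
            Future<RecordMetadata> future = producer.send(record);
            RecordMetadata metadata = future.get();
            System.out.println("Written to partition " + metadata.partition()
                    + " at offset " + metadata.offset());
        }
    }
}
```

Blocking on the Future is fine for a demo; for throughput, prefer the Callback variant shown in the producer-callbacks post so sends stay asynchronous.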