In this article, we will look at what Apache Kafka is, its components, and simple steps to set up Apache Kafka with ZooKeeper using Docker.
Apache Kafka is an open-source distributed event streaming platform developed by the Apache Software Foundation. It is designed for building real-time data pipelines and streaming applications. Kafka is known for its high throughput, fault tolerance, durability, and low latency, making it a popular choice for handling large volumes of data in real-time or near-real-time scenarios.
Kafka is widely used in various industries, including finance, e-commerce, social media, and more, for use cases such as log aggregation, real-time monitoring, data warehousing, and event-driven architectures. Its ability to handle real-time data streams efficiently has made it a fundamental component in many modern data processing and analytics pipelines.
Prerequisite: Install Docker & Docker Compose
First of all, you must install Docker using one of the following methods, based on your operating system.
Install Docker Desktop on Mac.
Install Docker Desktop on Windows.
Install Docker on Linux (choose your distro from the left-hand menu).
After successfully installing Docker, you can follow these steps to install/set up Docker Compose:
For Mac & Windows, if you have installed Docker Desktop, Docker Compose is already included as part of those desktop installs.
For Linux, follow the steps here (and complete all of the steps). A quick check of both installations is shown below.
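As an optional sanity check, both tools should report a version once the installation is complete (the exact version numbers will differ on your machine):
docker --version
docker-compose --version
If you are using the Compose plugin bundled with Docker Desktop rather than the standalone binary, the second command is docker compose version instead.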
Create a folder on your machine to keep the setup related to the Kafka Docker Compose file.
Create a new file (docker-compose.yml) in that directory and use the following content in it.
# docker-compose.yml
version: "3.7"
services:
  zookeeper:
    restart: always
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper-volume:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    restart: always
    image: docker.io/bitnami/kafka:3.3
    ports:
      - "9093:9093"
    volumes:
      - "kafka-volume:/bitnami"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      # CLIENT is the listener used by containers on the same Docker network (kafka:9092);
      # EXTERNAL is the listener exposed to the host machine on localhost:9093.
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
    depends_on:
      - zookeeper
volumes:
  kafka-volume:
  zookeeper-volume:
In the above config file, we are using the Bitnami Docker images for Kafka and ZooKeeper.
Apache ZooKeeper is a distributed coordination service often used in conjunction with Apache Kafka to manage and maintain the metadata and configuration information required by Kafka brokers and to ensure the reliability and stability of a Kafka cluster. While Kafka itself handles the distributed storage and processing of data, ZooKeeper plays a critical role in managing the distributed infrastructure and providing coordination services.
Here are some of the key responsibilities of Apache ZooKeeper in an Apache Kafka ecosystem:
Cluster Coordination: ZooKeeper helps in maintaining a reliable and consistent view of the Kafka cluster by electing leaders, coordinating distributed operations, and ensuring that all brokers are aware of the cluster’s state.
Leader Election: Kafka uses ZooKeeper for leader election among broker nodes. Each partition in Kafka has a leader, and ZooKeeper helps in selecting and maintaining the leader, ensuring that one broker is responsible for reading and writing data to a partition at any given time.
Configuration Management: Kafka stores configuration information, such as topic and partition details, in ZooKeeper. This centralizes the configuration management, making it easier to apply changes consistently across the Kafka cluster.
Dynamic Broker Registration: ZooKeeper assists in dynamic broker registration and discovery. When a Kafka broker starts or stops, it registers or deregisters itself with ZooKeeper, allowing other brokers and clients to discover the current set of active brokers.
Health Monitoring: ZooKeeper provides a way to monitor the health of Kafka brokers. By regularly checking the status of ZooKeeper nodes, administrators can detect broker failures and take appropriate actions, such as reassigning partitions.
Synchronization and Locking: ZooKeeper offers synchronization primitives like locks and semaphores, which Kafka can use to coordinate activities among brokers and clients. These primitives are helpful in scenarios where distributed coordination is required.
Metadata Storage: Kafka stores critical metadata, such as topic and partition information, broker configurations, and access control lists (ACLs), in ZooKeeper. This metadata is essential for the proper functioning of the Kafka cluster.
It’s important to note that starting with Apache Kafka version 2.8.0, Kafka has been working toward reducing its dependency on ZooKeeper through a feature called KRaft mode. In KRaft mode, Kafka replaces ZooKeeper with an internal metadata quorum, making Kafka clusters more self-contained and simplifying their operational complexity. Whether KRaft mode is available and production-ready depends on the specific Kafka version you are using, so refer to the official Kafka documentation and release notes for the most up-to-date information regarding ZooKeeper’s role in your deployment.
After the Docker Compose file is created, you can start your Kafka and ZooKeeper services using the following command:
docker-compose up -d
You have now successfully set up and run Kafka in a container.
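Before moving on, it is worth confirming that both containers are healthy. The commands below are an optional sketch of such a check; the paths assume the standard Bitnami image layout under /opt/bitnami, and the topic name test-topic is just an example, so adjust them if your setup differs:
docker-compose ps
docker-compose exec zookeeper /opt/bitnami/zookeeper/bin/zkCli.sh -server localhost:2181 ls /brokers/ids
docker-compose exec kafka /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test-topic --partitions 1 --replication-factor 1
The first command should list both services as up. The second should print [1], the KAFKA_BROKER_ID we configured, which illustrates the broker registration in ZooKeeper described earlier. The third creates a topic you can use for testing.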
In our next step, we will see how we can broadcast some sample messages to a topic and receive them using a simple NestJS (Node.js) application.
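If you want to sanity-check the EXTERNAL listener from the host before then, here is a minimal sketch using the kafkajs client (a popular Node.js Kafka library, installed with npm install kafkajs). The topic name test-topic and the client/group IDs are only illustrative, and this is not the NestJS setup the next article will cover.
// publish-and-consume.ts — a minimal kafkajs round trip against localhost:9093
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "demo-client", brokers: ["localhost:9093"] });

async function main() {
  // Send one message to the illustrative topic "test-topic".
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "test-topic",
    messages: [{ value: "hello from the host machine" }],
  });
  await producer.disconnect();

  // Read it back with a consumer.
  const consumer = kafka.consumer({ groupId: "demo-group" });
  await consumer.connect();
  await consumer.subscribe({ topics: ["test-topic"], fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log(`received on ${topic}[${partition}]: ${message.value?.toString()}`);
    },
  });
}

main().catch(console.error);
Running it (for example with ts-node) should print the message back; press Ctrl+C to stop the consumer.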
Refer to the Kafka documentation to read more about the Kafka architecture.