Kafka 05: Kafka Consumer with ElasticSearch


Pre Configure
1. Register a Bonsai account.
2. Increase the maximum limit (we will demonstrate batch processing, so 1000 records is easy to reach). (code snippet omitted)
3. Install the Maven dependency. (code snippet omitted)

The Real Code
1. Set up the ElasticSearch client. (code snippet omitted)
2. Create a Kafka consumer. (code snippet omitted)
3. Post records from Kafka into ElasticSearch (Bonsai). (code snippet omitted)

Consumer Offset Commit Strategy
At most once vs. at least once:
At most once: offsets are committed as soon as the message is received. If the processing goes wrong, the message will be lost and it [...]
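The at-least-once strategy above is usually paired with an idempotent write: if the ElasticSearch document id is derived deterministically from the record, reprocessing the same record overwrites a document instead of duplicating it. A minimal sketch of one common scheme (topic + partition + offset); the class and method names are illustrative, not the post's hidden code:

```java
public class RecordId {
    // Build a stable document id from the record's coordinates. With
    // at-least-once delivery the same record may be processed twice; a
    // deterministic id turns the second index request into an overwrite.
    static String idFor(String topic, int partition, long offset) {
        return topic + "_" + partition + "_" + offset;
    }

    public static void main(String[] args) {
        // The same record always produces the same id.
        System.out.println(idFor("twitter_tweets", 0, 42)); // twitter_tweets_0_42
    }
}
```

Passing this id to the ElasticSearch index request (instead of letting ES auto-generate one) is what makes the consumer safe to replay.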

Kafka 05: Kafka Consumer with ElasticSearch (2019-06-20T23:45:19+10:00)

PHP xDebug with Docker


Single PC with Remote Debug

Step 1: Install xDebug in the remote web server.
Step 2: Modify xdebug.ini (or php.ini). (code snippet omitted)
Step 3: Set up PHPStorm.
File -> Settings -> Languages & Frameworks -> PHP -> CLI Interpreter -> ... -> + from Docker (one project should be Frontend, one project is Backend; they are separate).
File -> Settings -> Languages & Frameworks -> PHP -> Debug: change the Debug port to 9001 (for the Backend project, set it to 9002).
File -> Settings -> Languages & Frameworks -> PHP -> Servers: specify a [...]
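The xdebug.ini change in Step 2 typically looks like the following for xDebug 2.x inside a Docker container. This is a sketch only; the host value and idekey are assumptions, chosen to match the 9001/9002 ports mentioned above:

```ini
; Hypothetical xdebug.ini fragment (xDebug 2.x setting names).
; host.docker.internal is the IDE's address as seen from inside Docker;
; on Linux you may need the docker0 bridge IP instead.
xdebug.remote_enable=1
xdebug.remote_host=host.docker.internal
xdebug.remote_port=9001        ; use 9002 for the Backend project
xdebug.remote_autostart=1
xdebug.idekey=PHPSTORM
```

The port here must match the Debug port configured in PHPStorm in Step 3.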

PHP xDebug with Docker (2019-06-18T23:16:10+10:00)

Kafka 04: Producer Configurations


Overview
The overview of our TwitterProducer config. (code snippet omitted)

About acks (code snippet omitted)
acks = 0: no response is required. If the broker goes offline, we will lose data. Useful for data where it's OK to lose some: metrics, log collection.
acks = 1: a response from the leader is requested; no replication acknowledgement is required. The producer may retry if the ack from the leader is not received. If the leader goes offline before replication, we can lose data.
acks = all: leader + replica responses are required. Added latency, but safety: no data loss as long as enough replicas are in sync. [...]
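As a worked example of the settings above, here is a plain java.util.Properties sketch of a "safe producer" configuration. The property keys are the standard Kafka client names; the class name and exact values are illustrative, not the post's hidden TwitterProducer code:

```java
import java.util.Properties;

public class SafeProducerConfig {
    // Assemble the producer settings discussed above.
    static Properties build() {
        Properties props = new Properties();
        // acks=all: wait for the leader and the in-sync replicas.
        props.setProperty("acks", "all");
        // Retry (effectively) forever rather than drop a record.
        props.setProperty("retries", Integer.toString(Integer.MAX_VALUE));
        // Idempotence prevents duplicates introduced by those retries.
        props.setProperty("enable.idempotence", "true");
        // Bounded in-flight requests keep ordering with idempotence enabled.
        props.setProperty("max.in.flight.requests.per.connection", "5");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("acks")); // all
    }
}
```

These same Properties would then be passed to the KafkaProducer constructor.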

Kafka 04: Producer Configurations (2019-06-12T00:17:00+10:00)

Kafka 03: Twitter Producer (Java)


Setup Twitter Developer Account and Create an App
link: You need to give a good reason and a detailed description of your app.

Get Dependencies
link: (code snippet omitted)

Create New Producer and Consumer
Step 1: Overview. (code snippet omitted)
Step 2: Create a Twitter client. (code snippet omitted)
Step 3: Create a Kafka producer. (code snippet omitted)
Step 4: Create a topic. (code snippet omitted)
Step 5: Launch the Kafka console consumer. (code snippet omitted)

Full Code (code snippet omitted)
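Between Step 2 and Step 3, the Twitter client typically hands messages to the producer loop through a bounded blocking queue. A self-contained sketch of that handoff using only the JDK; the capacity, timeout, and names are assumptions, since the actual code behind the snippets is not shown in this excerpt:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueHandoff {
    // The producer loop polls one message off the queue, waiting up to
    // 5 seconds; returns null if nothing arrives or the thread is interrupted.
    static String drainOne(BlockingQueue<String> queue) {
        try {
            return queue.poll(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        // The Twitter client side would offer raw JSON tweets onto this queue.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);
        queue.offer("{\"text\":\"hello kafka\"}");
        // The producer side drains it and would call producer.send(...).
        System.out.println(drainOne(queue)); // {"text":"hello kafka"}
    }
}
```

A bounded queue is the usual choice here: if the producer falls behind, offer() fails fast instead of letting the client exhaust memory.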

Kafka 03: Twitter Producer (Java) (2019-06-12T00:10:54+10:00)

Kafka 02: Install and CLI


Install
Download the Kafka archive and unzip it (twice: first the .tgz, then the inner .tar), then paste it under C:\.
Create a folder data under Kafka's root dir, then create two folders, kafka and zookeeper, under the data folder.
Change properties:
config\ dataDir=C:/kafka_2.12-2.2.0/data/zookeeper
config\ log.dirs=C:/kafka_2.12-2.2.0/data/kafka

Launch Kafka
Add to the PATH environment variable: C:\kafka_2.12-2.2.0\bin\windows
Launch Zookeeper FIRST, then start Kafka:
zookeeper-server-start.bat config\
kafka-server-start.bat config\

kafka-topics
1. Create:
C:\kafka_2.12-2.2.0>kafka-topics --zookeeper --topic first_topic --create --partitions 3 --replication-factor 1
If we see the full list of commands with descriptions instead, something went wrong.
When creating a topic, we need to specify how many partitions [...]
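The launch and topic-creation steps above, collected as commands. This is a sketch for the Windows layout described; the zookeeper host argument and the .properties file names are assumptions (the excerpt truncates them), so adjust to your install:

```shell
# Sketch only: requires a local Kafka 2.2 install with bin\windows on PATH.
# localhost:2181 and the .properties names are assumed, not from the post.
zookeeper-server-start.bat config\zookeeper.properties
kafka-server-start.bat config\server.properties

# In a new terminal, once both servers are up:
kafka-topics --zookeeper localhost:2181 --topic first_topic --create --partitions 3 --replication-factor 1
kafka-topics --zookeeper localhost:2181 --list
```

If a command prints its full usage/help text instead of a result, an argument was malformed, which matches the "list of commands means something is wrong" note above.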

Kafka 02: Install and CLI (2019-06-12T00:10:38+10:00)

Kafka 01: The Theory


Topics, Partitions and Offsets
Topics: a particular stream of data, similar to a table in a database. You can have as many topics as you want; each is identified by its name.
Partitions: a topic is split into partitions. Each partition is ordered. When you create a topic, you have to define how many partitions you want.
Offsets: within a partition, each message gets an auto-incrementing id, and the sequence is unbounded. An offset only has meaning for a specific partition: offset 1 in partition 1 refers to a different message than offset 1 in partition 2. Ordering is only guaranteed within a single partition. Data [...]
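The per-partition offset rule can be illustrated with a toy in-memory model: each partition is its own append-only list, so ids start from 0 independently in every partition. The structure and names here are illustrative only, not Kafka's actual storage:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionSketch {
    // Append a message to one partition's log and return its offset.
    // Offsets are just positions in that partition's list, which is why
    // they carry no meaning across partitions.
    static long append(Map<Integer, List<String>> log, int partition, String message) {
        List<String> p = log.computeIfAbsent(partition, k -> new ArrayList<>());
        p.add(message);
        return p.size() - 1;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> log = new HashMap<>();
        System.out.println(append(log, 0, "a")); // 0
        System.out.println(append(log, 0, "b")); // 1
        System.out.println(append(log, 1, "c")); // 0 (independent of partition 0)
    }
}
```

Note that "offset 1" exists in partition 0 but not in partition 1 here, matching the point above that an offset is only meaningful relative to its own partition.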

Kafka 01: The Theory (2019-06-12T00:10:44+10:00)