hadoop kafka install multi-broker (7)
A multi-broker setup works as a real Kafka cluster: several brokers replicate the same topics, which is what lets us test replication and fault tolerance below.
First we make a config file for each of the brokers (on Windows use the copy command instead):
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.
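The copy-and-edit step can also be scripted. This is my own addition, not part of the quickstart; it assumes GNU sed and the stock server.properties layout, in which the listeners line starts out commented:
for i in 1 2; do
  # copy the base config, then point id, port and log dir at this broker
  cp config/server.properties config/server-$i.properties
  sed -E -i \
    -e "s|^broker\.id=.*|broker.id=$i|" \
    -e "s|^#?listeners=.*|listeners=PLAINTEXT://:$((9092 + i))|" \
    -e "s|^log\.dirs=.*|log.dirs=/tmp/kafka-logs-$i|" \
    config/server-$i.properties
done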
We already have Zookeeper and our single node started, so we just need to start the two new nodes:
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-server-start.sh config/server-2.properties &
...
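A side note from me (not in the original quickstart): launched with a bare &, both brokers log to the terminal and die when it closes. Redirecting their output and using nohup avoids that; the versions I have used also accept a -daemon option, but run the script without arguments to see its usage line before relying on it:
> nohup bin/kafka-server-start.sh config/server-1.properties > /tmp/broker-1.log 2>&1 &
> nohup bin/kafka-server-start.sh config/server-2.properties > /tmp/broker-2.log 2>&1 &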
Now create a new topic with a replication factor of three:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Okay, but now that we have a cluster, how can we know which broker is doing what? To see that, run the "describe topics" command:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic, there is only one line.
"leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
"replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
"isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
Note that in my example node 1 is the leader for the only partition of the topic.
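kafka-topics.sh can also filter the describe output for trouble across every topic at once. These two flags are present in the Kafka releases this series has been using, but this is my addition, so confirm them against your own build:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --under-replicated-partitions
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --unavailable-partitions
The first lists partitions whose ISR is smaller than the replica list; the second lists partitions that currently have no live leader.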
We can run the same command on the original topic we created to see where it is:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
So there is no surprise there—the original topic has no replicas and is on server 0, the only server in our cluster when we created it.
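If we wanted the old topic replicated as well, the replication factor cannot be raised with a simple --alter; it takes a partition reassignment. A minimal sketch of that, again my own addition (the JSON file name is made up, and it assumes the kafka-reassign-partitions.sh tool shipped with this release):
> cat increase-replication.json
{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[0,1,2]}]}
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication.json --execute
Broker 0 stays in the replica list so the existing data does not have to move; brokers 1 and 2 then copy the partition over.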
Let's publish a few messages to our new topic:
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's consume these messages:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
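Since the topic is replicated, the consumer does not have to bootstrap from broker 0; any live broker works. For example (my own check, not in the quickstart), pointing it at broker 1 on port 9093 returns the same two messages:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --from-beginning --topic my-replicated-topic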
Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:
> ps aux | grep server-1.properties
7564 ttys002 0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
> kill -9 7564
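Grepping ps output like this also matches the grep process itself and means copying the PID by hand. A shorter alternative I use (assuming pkill is installed) finds and kills the broker in one step:
> pkill -9 -f server-1.properties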
Leadership has switched to one of the followers, and node 1 is no longer in the in-sync replica set:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 2 Replicas: 1,2,0 Isr: 2,0
But the messages are still available for consumption even though the leader that took the writes originally is down:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
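One last check of my own, beyond the quickstart steps: restart the killed broker and describe the topic again. Node 1 comes back as a follower and, once it has caught up, reappears in the Isr list:
> bin/kafka-server-start.sh config/server-1.properties &
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic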