Without further ado, straight to the good stuff!

Everything here comes from the official documentation:

http://kafka.apache.org/documentation/

Step 6: Setting up a multi-broker cluster

Step 6: Set up a multi-broker cluster

  So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get feel for it, let's expand our cluster to three nodes (still all on our local machine).

So far we have only been running a single broker, which isn't much fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. Let's expand our cluster to three nodes (still all on the local machine).

  First we make a config file for each of the brokers (on Windows use the copy command instead):

First, create a config file for each of the new brokers:
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
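As noted above, on Windows you would use the copy command instead of cp:
> copy config\server.properties config\server-1.properties
> copy config\server.properties config\server-2.properties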

  Now edit these new files and set the following properties:

Now edit these new files and set the following properties:
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
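If you prefer not to edit the files by hand, here is a minimal sketch using sed to apply the same overrides. It assumes GNU sed and the stock server.properties defaults (broker.id=0, a commented-out listeners line, and log.dirs=/tmp/kafka-logs); adjust the patterns if your file differs:
> sed -i -e 's/^broker.id=0/broker.id=1/' \
      -e 's|^#listeners=.*|listeners=PLAINTEXT://:9093|' \
      -e 's|^log.dirs=.*|log.dir=/tmp/kafka-logs-1|' config/server-1.properties
> sed -i -e 's/^broker.id=0/broker.id=2/' \
      -e 's|^#listeners=.*|listeners=PLAINTEXT://:9094|' \
      -e 's|^log.dirs=.*|log.dir=/tmp/kafka-logs-2|' config/server-2.properties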

  The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.

broker.id is the unique and permanent name of each node in the cluster. We only have to override the port and log directory because we are running all the brokers on the same machine, and we want to keep them from registering on the same port or overwriting each other's data.
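A quick sanity check (not part of the official quickstart, just a convenience) to confirm each file ended up with distinct values:
> grep -E '^(broker.id|listeners|log.dir)' config/server-1.properties config/server-2.properties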

  We already have Zookeeper and our single node started, so we just need to start the two new nodes:

We already have ZooKeeper and the single Kafka node from before running, so we just need to start the two new nodes:
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-server-start.sh config/server-2.properties &
...
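Optionally, you can confirm that all three brokers registered themselves in ZooKeeper. One way, assuming the zookeeper-shell.sh script that ships with Kafka, is to list the registered broker ids; the output should end with something like [0, 1, 2]:
> bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
...
[0, 1, 2]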

  Now create a new topic with a replication factor of three:

Now create a new topic with the replication factor set to 3:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
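Optionally, the same --list command used in Step 3 will confirm the topic was created; its output should include my-replicated-topic along with the earlier test topic:
> bin/kafka-topics.sh --list --zookeeper localhost:2181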


  Okay but now that we have a cluster how can we know which broker is doing what? To see that run the "describe topics" command:

Okay, now that we have a cluster, how do we know which broker is doing what? Run the "describe topics" command to see the topic's details:

> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0

  Here is an explanation of output. The first line gives a summary of all the partitions, each additional line gives information about one partition. Since we have only one partition for this topic there is only one line.

Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since this topic has only one partition, there is only one such line.
  • "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
  • "replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
"leader": the node responsible for all reads and writes for the given partition; each node ends up as leader for a randomly chosen share of the partitions.
"replicas": the nodes that replicate this partition's log, listed regardless of whether they are the leader or even currently alive.
"isr": the set of "in-sync" replicas, i.e. the subset of the replicas that are alive and caught up to the leader.

  Note that in my example node 1 is the leader for the only partition of the topic.

  We can run the same command on the original topic we created to see where it is:

We can run the same command on the topic we created at the beginning to see where it is:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0

  So there is no surprise there—the original topic has no replicas and is on server 0, the only server in our cluster when we created it.

No surprise there: the original topic has no extra replicas and lives on server 0, the only server in our cluster when it was created.

  Let's publish a few messages to our new topic:

Let's publish a few messages to our new topic:
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C

  Now let's consume these messages:

Now let's consume these messages:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C

  Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:

Now let's test the cluster's fault tolerance by killing the leader. Broker 1 is acting as the leader, so that is the one we kill:
> ps aux | grep server-1.properties
7564 ttys002 0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
> kill -9 7564

On Windows use (not something I'd recommend):

> wmic process get processid,caption,commandline | find "java.exe" | find "server-1.properties"
java.exe java -Xmx1G -Xms1G -server -XX:+UseG1GC ... build\libs\kafka_2.10-0.10.2.0.jar" kafka.Kafka config\server-1.properties 644
> taskkill /pid 644 /f

  Leadership has switched to one of the slaves and node 1 is no longer in the in-sync replica set:

One of the follower replicas has become the new leader, and broker 1 is no longer in the in-sync replica set:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 2 Replicas: 1,2,0 Isr: 2,0

  But the messages are still available for consumption even though the leader that took the writes originally is down:

But the messages are still available for consumption, even though the leader that originally took the writes is down:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
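As an optional extra beyond the official quickstart: if you restart the broker we killed, it should catch up from the new leader and rejoin the in-sync replica set, which you can verify by running describe again:
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic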
