The Kubernetes version is 1.7.6.

Storage is NFS-backed NAS.

Problems with NFS

  • While bringing up the ZooKeeper cluster, every instance kept ending up with the same myid. `kubectl describe pod` confirmed that each pod was bound to a different PVC, so the problem had to lie in how the PVs were created: the PVs cannot all be mounted on one big shared export, because every instance ultimately writes to the same path, /var/lib/zookeeper/data, so whichever PVC a pod binds, it lands on the same directory. The fix is to create a separate storage directory for each PV and mount them individually.
  • When creating a PV, specify storageClassName, for example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nas-zk
  nfs:
    path: /k8s/weblogic
    server: 192.168.0.103

When creating the PVC, specify the same storageClassName, for example:

  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: nas-zk

This ensures that ZooKeeper's claims only bind PVs carrying that storage class.

  • The accessModes of the PV and the PVC must match exactly, otherwise the claim will never find a volume.
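A minimal sketch of the fix described above: one PV per ZooKeeper instance, each pointing at its own NFS subdirectory. The directory names (/k8s/zk0 and so on) are placeholders — substitute whatever actually exists on your NAS:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk-0            # one PV per instance
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nas-zk
  nfs:
    path: /k8s/zk0         # a distinct directory per PV is the whole point
    server: 192.168.0.103
---
# pv-zk-1 and pv-zk-2 are identical except for metadata.name
# and nfs.path (/k8s/zk1, /k8s/zk2).
```

With three such PVs, each PVC created by the volumeClaimTemplates binds its own directory, so the myid files no longer collide.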

Script for creating the ZooKeeper cluster

https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/zookeeper.yaml

You can also refer to

https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/

But on my 1.7.6 cluster that tutorial kept misbehaving: instead of bringing the pods up one by one, it created all of them at once.

Only after switching to the script and image above did the problem go away.
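One possible explanation worth checking (an assumption on my part, not something I verified on 1.7.6): StatefulSet gained a podManagementPolicy field in 1.7, and only the default OrderedReady gives one-at-a-time creation, while Parallel launches every replica at once. Pinning the field explicitly in the manifest rules that setting out as the cause:

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-svc
  replicas: 3
  podManagementPolicy: OrderedReady  # default; pods created one by one, in ordinal order
  # template: and volumeClaimTemplates: as in the zk.yaml listing below
```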

Verifying the ZooKeeper cluster

for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2

for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3

for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
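The myid values above follow a simple convention used by the k8szk image's zkGenConfig.sh: myid = pod ordinal + 1 (ZooKeeper requires myid >= 1, while StatefulSet ordinals start at 0). In isolation:

```shell
# Derive a ZooKeeper myid from a StatefulSet pod hostname.
HOST=zk-1                # example: hostname of the second replica
ORD=${HOST##*-}          # strip everything up to the last '-' -> the ordinal
MYID=$((ORD + 1))        # myid = ordinal + 1
echo "$MYID"             # prints 2
```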

Exposing the service outside the cluster

kubectl label pod zk-0 zkInst=0
kubectl label pod zk-1 zkInst=1
kubectl label pod zk-2 zkInst=2

kubectl expose po zk-0 --port=2181 --target-port=2181 --name=zk-0 --selector=zkInst=0 --type=NodePort
kubectl expose po zk-1 --port=2181 --target-port=2181 --name=zk-1 --selector=zkInst=1 --type=NodePort
kubectl expose po zk-2 --port=2181 --target-port=2181 --name=zk-2 --selector=zkInst=2 --type=NodePort
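An external client then connects through any node's IP plus the three assigned NodePorts. The node IP and port numbers below are hypothetical — read the real ones from `kubectl get svc zk-0 zk-1 zk-2`:

```shell
# Build a ZooKeeper connect string from a node IP and the assigned NodePorts.
# NODE_IP and PORTS are placeholders for whatever your cluster reports.
NODE_IP=192.168.0.103
PORTS="30181 30182 30183"
CONNECT=""
for p in $PORTS; do
  # append "ip:port", comma-separating all entries after the first
  CONNECT="${CONNECT:+$CONNECT,}$NODE_IP:$p"
done
echo "$CONNECT"   # 192.168.0.103:30181,192.168.0.103:30182,192.168.0.103:30183
```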

Creating the Kafka cluster

Build script

https://github.com/kubernetes/contrib/blob/master/statefulsets/kafka/kafka.yaml

Verification

root@kafka-0:/opt/kafka/config# kafka-topics.sh --create \
> --topic test \
> --zookeeper zoo-0.zk.default.svc.cluster.local:2181,zoo-1.zk.default.svc.cluster.local:2181,zoo-2.zk.default.svc.cluster.local:2181 \
> --partitions 3 \
> --replication-factor 3
Created topic "test".
root@kafka-0:/opt/kafka/config# kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093

root@kafka-1:/# kafka-console-producer.sh --topic test --broker-list localhost:9093
I like kafka
hello world

On the consumer side this shows up as:

I like kafka
hello world

Reference

https://cloud.tencent.com/developer/article/1005492

zk.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: zk-svc
  labels:
    app: zk-svc
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-cm
data:
  jvm.heap: "1G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v3
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_REPLICAS
          value: "3"
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: sync   # the upstream manifest reads key "tick" here by mistake
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

kafka.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8skafka:v1
        resources:
          requests:
            memory: "1Gi"
            cpu: 500m
        ports:
        - containerPort: 9093
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-0.zk-svc.default.svc.cluster.local:2181,zk-1.zk-svc.default.svc.cluster.local:2181,zk-2.zk-svc.default.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
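The `broker.id=${HOSTNAME##*-}` trick in the command above is what lets a single template serve all replicas: a StatefulSet pod's hostname ends in its ordinal, and stripping everything up to the last '-' yields a stable, unique broker id. In isolation:

```shell
# Each kafka pod derives its broker.id from its own hostname.
HOSTNAME=kafka-2           # example StatefulSet pod hostname
BROKER_ID=${HOSTNAME##*-}  # '##*-' removes the longest prefix ending in '-'
echo "$BROKER_ID"          # prints 2
```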
