The Kubernetes version is 1.7.6.

Storage is NAS exposed over NFS.

Problems with NFS

  • When bringing up the zk cluster, every pod kept ending up with the same myid. kubectl describe pod confirmed that each pod was bound to a different PVC, so the problem had to be in how the PVs were created: the PVs must not all be carved out of one big shared export, because every replica writes to the same path, /var/lib/zookeeper/data, so no matter which PVC binds, it lands in the same directory. The fix is to create a separate export directory per replica and back each PV with its own directory (see the sketch after this list).
  • When creating a PV, specify storageClassName, for example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nas-zk
  nfs:
    path: /k8s/weblogic
    server: 192.168.0.103

When using the PVC, also specify the same storageClassName, for example:

  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: nas-zk

This restricts zk's claims to binding only PVs that carry this storage class.

  • The accessModes of the PV and the PVC must match exactly; otherwise the claim will never find a volume.
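
A minimal sketch of the per-replica layout from the first bullet, assuming an export root of /k8s on the NFS server from the example above and three zk replicas; the zk-0..zk-2 directory and zk-pv-$i PV names are illustrative, not required. Note that a PV must be at least as large as the claim, so these are sized 10Gi to satisfy the 10Gi claims in the volumeClaimTemplates:

# On the NFS server: one export directory per replica (names are hypothetical).
mkdir -p /k8s/zk-0 /k8s/zk-1 /k8s/zk-2

# On the Kubernetes side: one PV per directory, all tagged nas-zk so the
# StatefulSet's claims can only bind to them.
for i in 0 1 2; do
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk-pv-$i
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nas-zk
  nfs:
    path: /k8s/zk-$i
    server: 192.168.0.103
EOF
done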

Script for building the zk cluster

https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/zookeeper.yaml

You can also refer to

https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/

But on my 1.7.6 cluster it kept misbehaving: the pods were not brought up one at a time, as a StatefulSet should create them, but all at once.

The problem only went away after switching to the script and image shown in zk.yaml below.
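
To confirm that pods really are created one at a time, the StatefulSet can be watched while it comes up; zk-0 should be Running and Ready before zk-1 even appears:

kubectl get pods -w -l app=zk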

Verifying the zk cluster

for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2

for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3

for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
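
Leader and follower roles can be spot-checked as well. A quick sketch using ZooKeeper's four-letter commands, assuming nc is present in the image (its zkOk.sh readiness check relies on the same mechanism); one pod should report Mode: leader and the other two Mode: follower:

for i in 0 1 2; do kubectl exec zk-$i -- sh -c 'echo srvr | nc localhost 2181' | grep Mode; done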

Exposing the service outside the cluster

kubectl label pod zk-0 zkInst=0
kubectl label pod zk-1 zkInst=1
kubectl label pod zk-2 zkInst=2

kubectl expose po zk-0 --port=2181 --target-port=2181 --name=zk-0 --selector=zkInst=0 --type=NodePort
kubectl expose po zk-1 --port=2181 --target-port=2181 --name=zk-1 --selector=zkInst=1 --type=NodePort
kubectl expose po zk-2 --port=2181 --target-port=2181 --name=zk-2 --selector=zkInst=2 --type=NodePort
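
To find the assigned NodePorts and smoke-test one instance from outside the cluster (a quick sketch; <node-ip> and <node-port> are placeholders taken from the kubectl output):

kubectl get svc zk-0 zk-1 zk-2
# From a machine outside the cluster; a healthy ZooKeeper answers "imok":
echo ruok | nc <node-ip> <node-port>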

Building the Kafka cluster

Build script

https://github.com/kubernetes/contrib/blob/master/statefulsets/kafka/kafka.yaml

Verification

root@kafka-0:/opt/kafka/config# kafka-topics.sh --create \
> --topic test \
> --zookeeper zoo-0.zk.default.svc.cluster.local:2181,zoo-1.zk.default.svc.cluster.local:2181,zoo-2.zk.default.svc.cluster.local:2181 \
> --partitions 3 \
> --replication-factor 3
Created topic "test".
root@kafka-0:/opt/kafka/config# kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093

root@kafka-1:/# kafka-console-producer.sh --topic test --broker-list localhost:9093
I like kafka
hello world

# On the consumer side this appears as:
I like kafka
hello world
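
To double-check that the topic really is spread across the brokers, it can be described against the same ensemble (a sketch; adjust the connect string to your own service names):

kafka-topics.sh --describe --topic test \
--zookeeper zoo-0.zk.default.svc.cluster.local:2181
# Prints one line per partition with its leader, replicas, and ISR.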

References

https://cloud.tencent.com/developer/article/1005492

zk.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: zk-svc
  labels:
    app: zk-svc
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-cm
data:
  jvm.heap: "1G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v3
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_REPLICAS
          value: "3"
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: tick
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
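
Note that this volumeClaimTemplates carries no storageClassName, so the claims will bind to any available ReadWriteOnce PV. To tie them to the nas-zk PVs as described in the NFS notes above, add the field:

  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nas-zk
      resources:
        requests:
          storage: 10Gi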

kafka.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8skafka:v1
        resources:
          requests:
            memory: "1Gi"
            cpu: 500m
        ports:
        - containerPort: 9093
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-0.zk-svc.default.svc.cluster.local:2181,zk-1.zk-svc.default.svc.cluster.local:2181,zk-2.zk-svc.default.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
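
The same storageClassName consideration applies here if Kafka should also land on dedicated PVs; its claims likewise need three 10Gi ReadWriteOnce volumes. With the PVs in place, both manifests can be applied and watched in the usual way:

kubectl apply -f zk.yaml
kubectl get pods -w -l app=zk      # wait for zk-0..zk-2 to be Running and Ready
kubectl apply -f kafka.yaml
kubectl get pods -w -l app=kafka   # kafka-0..kafka-2 come up one at a time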
