Deploying ZooKeeper and Kafka on Kubernetes with StatefulSet
Prerequisites
- At least three worker nodes are required; otherwise adjust the pod anti-affinity configuration.
- These manifests do not set up external access; expose the services yourself if clients outside the cluster need them.
- A StorageClass must be available, so that PVs do not have to be created by hand.
Deploying ZooKeeper and Kafka
References:
https://www.cnblogs.com/ericnie/p/8562561.html
https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/zookeeper.yaml
Deploying ZooKeeper
zookeeper.yaml is too long to read comfortably inline, so download it from https://raw.githubusercontent.com/kubernetes-retired/contrib/master/statefulsets/zookeeper/zookeeper.yaml and modify it. The main changes:
1. The upstream file uses imagePullPolicy: Always; there is no need to re-pull the image on every start, so I changed it to imagePullPolicy: IfNotPresent.
2. The upstream image registry is not reachable from mainland China without a proxy, so I pushed the image to my own repository and changed the image to registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3.
3. Since we use a StorageClass, add storageClassName under volumeClaimTemplates; adjust it to your own environment.
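The three edits can be scripted instead of made by hand. The sketch below applies the first two with sed; a tiny stand-in fragment keeps it self-contained (the exact "Always"/gcr.io lines are assumptions about the upstream file — point FILE at your real downloaded copy).

```shell
# Patch imagePullPolicy and the image address in a local copy of the manifest.
FILE=/tmp/zookeeper.yaml
cat > "$FILE" <<'EOF'
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v3
EOF
sed -e 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/' \
    -e 's#image: .*#image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3#' \
    "$FILE" > "$FILE.patched"
cat "$FILE.patched"
```

The storageClassName addition inserts a new line rather than replacing one, so it is easier to make in an editor.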
---
apiVersion: v1
kind: Service
metadata:
  name: zk-svc
  labels:
    app: zk-svc
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-cm
data:
  jvm.heap: "1G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_REPLICAS
          value: "3"
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: sync
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: course-nfs-storage
kubectl apply -f zookeeper.yaml
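Each replica needs a unique myid under /var/lib/zookeeper/data. zkGenConfig.sh inside the image derives it from the StatefulSet ordinal in the pod's hostname; the sketch below shows the idea (the +1 offset is an assumption about the script, chosen because ZooKeeper server ids are conventionally 1-based, and HOST stands in for $(hostname) inside the pod).

```shell
# Derive a ZooKeeper myid from a StatefulSet pod name.
HOST=zk-2
ORD=${HOST##*-}     # strip everything up to the last "-": the ordinal, 2
MYID=$((ORD + 1))   # ZooKeeper server ids are 1-based
echo "myid=$MYID"
```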
Verify the ZooKeeper cluster:
$ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2
$ for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3
$ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-svc.default.svc.cluster.local
zk-1.zk-svc.default.svc.cluster.local
zk-2.zk-svc.default.svc.cluster.local
Exposing the service externally
kubectl label pod zk-0 zkInst=0
kubectl label pod zk-1 zkInst=1
kubectl label pod zk-2 zkInst=2
kubectl expose po zk-0 --port=2181 --target-port=2181 --name=zk-0 --selector=zkInst=0 --type=NodePort
kubectl expose po zk-1 --port=2181 --target-port=2181 --name=zk-1 --selector=zkInst=1 --type=NodePort
kubectl expose po zk-2 --port=2181 --target-port=2181 --name=zk-2 --selector=zkInst=2 --type=NodePort
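The per-pod label and expose commands above follow one pattern per ordinal. This loop just prints the equivalent commands (pipe its output to sh against a live cluster; 2181 is ZooKeeper's standard client port):

```shell
# Generate the label/expose command pair for each ZooKeeper pod ordinal.
CMDS=$(for i in 0 1 2; do
  echo "kubectl label pod zk-$i zkInst=$i"
  echo "kubectl expose po zk-$i --port=2181 --target-port=2181 --name=zk-$i --selector=zkInst=$i --type=NodePort"
done)
printf '%s\n' "$CMDS"
```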
Deploying Kafka
Modify https://raw.githubusercontent.com/kubernetes-retired/contrib/master/statefulsets/kafka/kafka.yaml in the same way. The changes:
1. imagePullPolicy: IfNotPresent
2. image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8skafka:v1
3. storageClassName: course-nfs-storage
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8skafka:v1
        resources:
          requests:
            memory: "1Gi"
            cpu: 500m
        ports:
        - containerPort: 9093
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-0.zk-svc.default.svc.cluster.local:2181,zk-1.zk-svc.default.svc.cluster.local:2181,zk-2.zk-svc.default.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=3 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=3 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: course-nfs-storage
kubectl apply -f kafka.yaml
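The long broker command leans on one shell trick: broker.id=${HOSTNAME##*-} turns the pod name into a stable broker id, so kafka-0, kafka-1, and kafka-2 keep ids 0, 1, and 2 across restarts and rescheduling. A quick sketch of the expansion:

```shell
# ${HOSTNAME##*-} strips the longest prefix up to the last "-", leaving the
# StatefulSet ordinal, so each pod derives a stable, unique broker id.
for HOSTNAME in kafka-0 kafka-1 kafka-2; do
  echo "broker.id=${HOSTNAME##*-}"
done
```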
Verify the cluster:
Open a shell in one of the Kafka pods and create a topic:
root@kafka-0 $ cd /opt/kafka/config
root@kafka-0 $ kafka-topics.sh --create --topic test --zookeeper zk-0.zk-svc.default.svc.cluster.local:2181,zk-1.zk-svc.default.svc.cluster.local:2181,zk-2.zk-svc.default.svc.cluster.local:2181 --partitions 3 --replication-factor 3
Created topic "test".
root@kafka-0 $ kafka-console-producer.sh --topic test --broker-list localhost:9093
I like kafka
hello world
Then consume from another Kafka pod to confirm the messages were replicated:
root@kafka-1 $ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning
I like kafka
hello world
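The --zookeeper connection string used above is just the three per-pod DNS names of the zk-svc headless service joined with commas; it can be composed in shell (namespace "default" and client port 2181 are the assumptions here):

```shell
# Build the zookeeper.connect string from the headless-service DNS names.
SVC=zk-svc.default.svc.cluster.local
ZK=""
for i in 0 1 2; do
  ZK="${ZK:+$ZK,}zk-$i.$SVC:2181"   # add a comma separator after the first entry
done
echo "$ZK"
```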