Using Ceph with Kubernetes
1. On the admin node, create a directory to hold the cluster configuration files, then run ceph-deploy from it:
mkdir /opt/cluster-ceph
cd /opt/cluster-ceph
ceph-deploy new master1 master2 master3
2. Add the EPEL repository:
yum install -y yum-utils && yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && yum install --nogpgcheck -y epel-release && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && rm -f /etc/yum.repos.d/dl.fedoraproject.org*
Installing Ceph without EPEL enabled fails with unresolved dependencies such as:
--> Finished Dependency Resolution
Error: Package: :ceph-common-10.2.-.el7.x86_64 (ceph)
Requires: libbabeltrace-ctf.so.()(64bit)
Error: Package: :ceph-osd-10.2.-.el7.x86_64 (ceph)
Requires: libleveldb.so.()(64bit)
Error: Package: :ceph-mon-10.2.-.el7.x86_64 (ceph)
Requires: libleveldb.so.()(64bit)
Error: Package: :librbd1-10.2.-.el7.x86_64 (ceph)
Requires: liblttng-ust.so.()(64bit)
Error: Package: :ceph-base-10.2.-.el7.x86_64 (ceph)
Requires: liblttng-ust.so.()(64bit)
Error: Package: :librgw2-10.2.-.el7.x86_64 (ceph)
Requires: libfcgi.so.()(64bit)
Error: Package: :ceph-common-10.2.-.el7.x86_64 (ceph)
Requires: libbabeltrace.so.()(64bit)
Error: Package: :librados2-10.2.-.el7.x86_64 (ceph)
Requires: liblttng-ust.so.()(64bit)
3. Install Ceph
Download the packages once, then install them from the local cache on every host:
[root@localhost ~]# yum install --downloadonly --downloaddir=/tmp/ceph ceph
[root@localhost ~]# yum localinstall -C -y --disablerepo=* /tmp/ceph/*.rpm
4. Configure the initial monitor(s) and gather all keys
# Be sure to run this from the deployment directory created in step 1
[root@admin ceph-cluster]# ceph-deploy mon create-initial
5. Initialize the OSD nodes
Create the storage directories on each OSD host:
[root@osd1 ~]# mkdir -p /data/ceph-osd
[root@osd1 ~]# chown ceph.ceph /data/ceph-osd/ -R
[root@osd2 ~]# mkdir -p /data/ceph-osd
[root@osd2 ~]# chown ceph.ceph /data/ceph-osd/ -R
[root@osd3 ~]# mkdir -p /data/ceph-osd
[root@osd3 ~]# chown ceph.ceph /data/ceph-osd/ -R
[root@osd4 ~]# mkdir -p /data/ceph-osd
[root@osd4 ~]# chown ceph.ceph /data/ceph-osd/ -R
Prepare the OSDs:
[root@admin ceph-cluster]# ceph-deploy osd prepare node1:/data/ceph-osd node2:/data/ceph-osd node3:/data/ceph-osd node4:/data/ceph-osd
Activate the OSDs:
[root@admin ceph-cluster]# ceph-deploy osd activate node1:/data/ceph-osd node2:/data/ceph-osd node3:/data/ceph-osd node4:/data/ceph-osd
6. Use ceph-deploy to copy the configuration file and admin keyring to the admin node and all Ceph nodes, so that Ceph CLI commands no longer need a monitor address or ceph.client.admin.keyring:
[root@admin ceph-cluster]# ceph-deploy admin master1 master2 master3 node1 node2 node3 node4
Make sure the keyring is readable (on every machine):
chmod +r /etc/ceph/ceph.client.admin.keyring
If the configuration file changes later, push it to all nodes again:
[root@admin ceph-cluster]# ceph-deploy --overwrite-conf admin master1 master2 master3 node1 node2 node3 node4
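Before the StorageClasses further down can provision volumes, the RBD pool they reference (fengjian) must exist. A minimal sizing sketch, assuming this 4-OSD cluster with 3-way replication and the common rule of thumb of ~100 placement groups per OSD; the ceph command itself needs a live cluster and is shown for reference only:

```shell
# Rough pg_num sizing: (OSDs * 100) / replica size, rounded up to a power of two.
osds=4
size=3
target=100
raw=$(( osds * target / size ))   # 133
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # 256
# On the admin node (requires the running cluster):
# ceph osd pool create fengjian 256 256
```

The power-of-two rounding matters because Ceph distributes data most evenly when pg_num is a power of two.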
The upstream ZooKeeper YAML, with the affinity rules removed:
apiVersion: v1
kind: Service
metadata:
namespace: testsubject
name: zk-hs
labels:
app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
clusterIP: None
selector:
app: zk
---
apiVersion: v1
kind: Service
metadata:
namespace: testsubject
name: zk-cs
labels:
app: zk
spec:
ports:
  - port: 2181
    name: client
selector:
app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
namespace: testsubject
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
namespace: testsubject
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
  replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: Parallel
template:
metadata:
labels:
app: zk
spec:
containers:
- name: kubernetes-zookeeper
imagePullPolicy: Always
image: "192.168.200.10/senyint/kubernetes-zookeeper:1.0-3.4.10"
resources:
requests:
memory: "4Gi"
cpu: ""
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
command:
- sh
- -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=OFF"
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
volumeMounts:
- name: datazk
mountPath: /var/lib/zookeeper
volumeClaimTemplates:
- metadata:
name: datazk
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: ceph-rbd-database
resources:
requests:
storage: 20Gi
ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret-admin-testsubject
namespace: testsubject
type: "kubernetes.io/rbd"
data:
key: QVFERkcvQmF5ckFkSnhBQVVkM2VCdC82K3dOTnZIM3V0ZHpnTnc9PQo=
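The key field above is the base64 encoding of the client.admin keyring. On a monitor node it would be produced with `ceph auth get-key client.admin | base64` (needs a live cluster); the encoding step itself, using the key string from this document, looks like:

```shell
# echo appends a trailing newline, which is why the stored value ends in "Qo=".
echo 'AQDFG/BayrAdJxAAUd3eBt/6+wNNvH3utdzgNw==' | base64
# → QVFERkcvQmF5ckFkSnhBQVVkM2VCdC82K3dOTnZIM3V0ZHpnTnc9PQo=
```

To sanity-check an existing Secret, pipe the stored value through `base64 -d` and compare it against the keyring.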
rbd-storage-data-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ceph-rbd-database
namespace: testsubject
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.200.11:6789,192.168.200.12:6789,192.168.200.13:6789
adminId: admin
adminSecretName: ceph-secret-admin-testsubject
adminSecretNamespace: "testsubject"
pool: fengjian
userId: admin
userSecretName: ceph-secret-admin-testsubject
  imageFormat: "2"
imageFeatures: "layering"
Instead of creating PVs by hand, use the StorageClass directly: create a PVC against it, then point the Deployment's claimName at that PVC.
ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
key: QVFERkcvQmF5ckFkSnhBQVVkM2VCdC82K3dOTnZIM3V0ZHpnTnc9PQo=
rbd-storage-data-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ceph-rbd-provisioner
provisioner: kubernetes.io/rbd
parameters:
monitors: 192.168.200.11:6789,192.168.200.12:6789,192.168.200.13:6789
adminId: admin
adminSecretName: ceph-secret-admin
adminSecretNamespace: default
pool: fengjian
userId: admin
userSecretName: ceph-secret-admin
imageFormat: "2"
imageFeatures: "layering"
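The redis Deployment below mounts a PVC named redis-master-rbd-pvc that is never shown. Against the ceph-rbd-provisioner StorageClass above it would be a sketch like the following (the 10Gi size is illustrative, not from the original):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-master-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd-provisioner
  resources:
    requests:
      storage: 10Gi
```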
redis.yaml
apiVersion: apps/v1 # for versions before 1.9. use apps/v1beta2
kind: Deployment
metadata:
name: redis-master
spec:
selector:
matchLabels:
app: redis
role: master
tier: backend
  replicas: 1
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master
image: 192.168.200.10/redis/redis:master
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
        - containerPort: 6379
volumeMounts:
- name: datadir
mountPath: /data
volumes:
- name: datadir
persistentVolumeClaim:
          claimName: redis-master-rbd-pvc
---
apiVersion: v1
kind: Service
metadata:
name: redis-master
labels:
app: redis
role: master
tier: backend
spec:
ports:
  - port: 6379
    targetPort: 6379
selector:
app: redis
role: master
tier: backend
Kafka manifest
Reference: https://kow3ns.github.io/kubernetes-kafka/manifests/
[root@master1 ceph_rbd]# cat kafka.yaml
apiVersion: v1
kind: Service
metadata:
name: kafka-hs
labels:
app: kafka
spec:
ports:
  - port: 9093
name: server
clusterIP: None
selector:
app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kafka-pdb
spec:
selector:
matchLabels:
app: kafka
  maxUnavailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: kafka
spec:
serviceName: kafka-hs
  replicas: 3
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: kafka
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- kafka
topologyKey: "kubernetes.io/hostname"
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
containers:
- name: k8skafka
imagePullPolicy: Always
image: 192.168.200.10/source/kubernetes-kafka:1.0-10.2.
resources:
requests:
memory: "12Gi"
cpu:
ports:
        - containerPort: 9093
name: server
command:
- sh
- -c
- "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
--override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \
--override log.dir=/var/lib/kafka \
--override auto.create.topics.enable=true \
--override auto.leader.rebalance.enable=true \
--override background.threads= \
--override compression.type=producer \
--override delete.topic.enable=false \
--override leader.imbalance.check.interval.seconds= \
--override leader.imbalance.per.broker.percentage= \
--override log.flush.interval.messages= \
--override log.flush.offset.checkpoint.interval.ms= \
--override log.flush.scheduler.interval.ms= \
          --override log.retention.bytes=-1 \
--override log.retention.hours= \
--override log.roll.hours= \
--override log.roll.jitter.hours= \
--override log.segment.bytes= \
--override log.segment.delete.delay.ms= \
--override message.max.bytes= \
--override min.insync.replicas= \
--override num.io.threads= \
--override num.network.threads= \
--override num.recovery.threads.per.data.dir= \
--override num.replica.fetchers= \
--override offset.metadata.max.bytes= \
          --override offsets.commit.required.acks=-1 \
--override offsets.commit.timeout.ms= \
--override offsets.load.buffer.size= \
--override offsets.retention.check.interval.ms= \
--override offsets.retention.minutes= \
--override offsets.topic.compression.codec= \
--override offsets.topic.num.partitions= \
--override offsets.topic.replication.factor= \
--override offsets.topic.segment.bytes= \
--override queued.max.requests= \
--override quota.consumer.default= \
--override quota.producer.default= \
--override replica.fetch.min.bytes= \
--override replica.fetch.wait.max.ms= \
--override replica.high.watermark.checkpoint.interval.ms= \
--override replica.lag.time.max.ms= \
--override replica.socket.receive.buffer.bytes= \
--override replica.socket.timeout.ms= \
--override request.timeout.ms= \
--override socket.receive.buffer.bytes= \
--override socket.request.max.bytes= \
--override socket.send.buffer.bytes= \
--override unclean.leader.election.enable=true \
--override zookeeper.session.timeout.ms= \
--override zookeeper.set.acl=false \
--override broker.id.generation.enable=true \
--override connections.max.idle.ms= \
--override controlled.shutdown.enable=true \
--override controlled.shutdown.max.retries= \
--override controlled.shutdown.retry.backoff.ms= \
--override controller.socket.timeout.ms= \
--override default.replication.factor= \
--override fetch.purgatory.purge.interval.requests= \
--override group.max.session.timeout.ms= \
--override group.min.session.timeout.ms= \
          --override inter.broker.protocol.version=0.10.2-IV0 \
--override log.cleaner.backoff.ms= \
--override log.cleaner.dedupe.buffer.size= \
--override log.cleaner.delete.retention.ms= \
--override log.cleaner.enable=true \
--override log.cleaner.io.buffer.load.factor=0.9 \
--override log.cleaner.io.buffer.size= \
--override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
--override log.cleaner.min.cleanable.ratio=0.5 \
--override log.cleaner.min.compaction.lag.ms= \
--override log.cleaner.threads= \
--override log.cleanup.policy=delete \
--override log.index.interval.bytes= \
--override log.index.size.max.bytes= \
--override log.message.timestamp.difference.max.ms= \
--override log.message.timestamp.type=CreateTime \
--override log.preallocate=false \
--override log.retention.check.interval.ms= \
--override max.connections.per.ip= \
--override num.partitions= \
--override producer.purgatory.purge.interval.requests= \
--override replica.fetch.backoff.ms= \
--override replica.fetch.max.bytes= \
--override replica.fetch.response.max.bytes= \
--override reserved.broker.max.id= "
env:
- name: KAFKA_HEAP_OPTS
          value: "-Xmx2G -Xms2G"
- name: KAFKA_OPTS
value: "-Dlogging.level=INFO"
volumeMounts:
- name: datadir
mountPath: /var/lib/kafka
readinessProbe:
tcpSocket:
            port: 9093
initialDelaySeconds:
periodSeconds:
securityContext:
        runAsUser: 1000
        fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: "ceph-rbd-provisioner"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 100Gi
zookeeper.yaml
Reference: https://github.com/kow3ns
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
clusterIP: None
selector:
app: zk
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
ports:
  - port: 2181
    name: client
selector:
app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: zk
spec:
serviceName: zk-hs
  replicas: 3
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: zk
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
containers:
- name: kubernetes-zookeeper
imagePullPolicy: Always
image: "gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10"
resources:
requests:
memory: "4Gi"
cpu: ""
ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
command:
- sh
- -c
- "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=3G \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
--log_level=INFO"
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
securityContext:
        runAsUser: 1000
        fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 250Gi