Using Ceph with Kubernetes
1. On the admin node, enter the directory created to hold the deployment configuration files and run the following with ceph-deploy:
mkdir /opt/cluster-ceph
cd /opt/cluster-ceph
ceph-deploy new master1 master2 master3
2. Add the EPEL repository
yum install -y yum-utils && yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && yum install --nogpgcheck -y epel-release && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && rm -f /etc/yum.repos.d/dl.fedoraproject.org*
Installing Ceph directly at this point fails with missing-dependency errors like the following:
--> Finished Dependency Resolution
Error: Package: 1:ceph-common-10.2.x-0.el7.x86_64 (ceph)
           Requires: libbabeltrace-ctf.so.1()(64bit)
Error: Package: 1:ceph-osd-10.2.x-0.el7.x86_64 (ceph)
           Requires: libleveldb.so.1()(64bit)
Error: Package: 1:ceph-mon-10.2.x-0.el7.x86_64 (ceph)
           Requires: libleveldb.so.1()(64bit)
Error: Package: 1:librbd1-10.2.x-0.el7.x86_64 (ceph)
           Requires: liblttng-ust.so.0()(64bit)
Error: Package: 1:ceph-base-10.2.x-0.el7.x86_64 (ceph)
           Requires: liblttng-ust.so.0()(64bit)
Error: Package: 1:librgw2-10.2.x-0.el7.x86_64 (ceph)
           Requires: libfcgi.so.0()(64bit)
Error: Package: 1:ceph-common-10.2.x-0.el7.x86_64 (ceph)
           Requires: libbabeltrace.so.1()(64bit)
Error: Package: 1:librados2-10.2.x-0.el7.x86_64 (ceph)
           Requires: liblttng-ust.so.0()(64bit)
3. Install Ceph
Download the packages once, then install them from the local cache on every host:
[root@localhost ~]# yum install --downloadonly --downloaddir=/tmp/ceph ceph
[root@localhost ~]# yum localinstall -C -y --disablerepo=* /tmp/ceph/*.rpm
Configure the initial monitor(s) and gather all keys:
# Be sure to run this from the ceph-cluster directory
[root@admin ceph-cluster]# ceph-deploy mon create-initial
Initialize the OSD nodes
Create the storage directories on each OSD host:
[root@osd1 ~]# mkdir -p /data/ceph-osd
[root@osd1 ~]# chown ceph.ceph /data/ceph-osd/ -R
[root@osd2 ~]# mkdir -p /data/ceph-osd
[root@osd2 ~]# chown ceph.ceph /data/ceph-osd/ -R
[root@osd3 ~]# mkdir -p /data/ceph-osd
[root@osd3 ~]# chown ceph.ceph /data/ceph-osd/ -R
[root@osd4 ~]# mkdir -p /data/ceph-osd
[root@osd4 ~]# chown ceph.ceph /data/ceph-osd/ -R
Create the OSDs:
[root@admin ceph-cluster]# ceph-deploy osd prepare node1:/data/ceph-osd node2:/data/ceph-osd node3:/data/ceph-osd node4:/data/ceph-osd
Activate the OSDs:
[root@admin ceph-cluster]# ceph-deploy osd activate node1:/data/ceph-osd node2:/data/ceph-osd node3:/data/ceph-osd node4:/data/ceph-osd
Use ceph-deploy to copy the configuration file and admin key to the admin node and all Ceph nodes, so that you no longer need to specify the monitor address and ceph.client.admin.keyring each time you run a Ceph command:
[root@admin ceph-cluster]# ceph-deploy admin master1 master2 master3 node1 node2 node3 node4
Make sure you have read permission on ceph.client.admin.keyring (on all machines):
chmod +r /etc/ceph/ceph.client.admin.keyring
If the configuration file changes, push it to all nodes again:
[root@admin ceph-cluster]# ceph-deploy --overwrite-conf admin master1 master2 master3 node1 node2 node3 node4
The ZooKeeper YAML from the official tutorial, with the affinity rules removed:
apiVersion: v1
kind: Service
metadata:
  namespace: testsubject
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  namespace: testsubject
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  namespace: testsubject
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  namespace: testsubject
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "192.168.200.10/senyint/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "4Gi"
            cpu: "2"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=OFF"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datazk
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datazk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ceph-rbd-database
      resources:
        requests:
          storage: 20Gi
ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin-testsubject
  namespace: testsubject
type: "kubernetes.io/rbd"
data:
  key: QVFERkcvQmF5ckFkSnhBQVVkM2VCdC82K3dOTnZIM3V0ZHpnTnc9PQo=
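The key field of the Secret is simply the base64-encoded Ceph admin key. On a monitor node it would normally be produced with `ceph auth get-key client.admin | base64`; as a hedged local sketch (the key below is a dummy placeholder, not a real credential), the encoding step looks like this:

```shell
# Hypothetical admin key for illustration only -- substitute the output of
# `ceph auth get-key client.admin` from your own cluster.
dummy_key="AQDFG/BayrAdJxAAUd3eBt/6+wNNvH3utdzgNw=="

# Base64-encode it; this is the value that goes under Secret.data.key.
printf '%s' "$dummy_key" | base64
```

Decoding the stored value with `base64 -d` must yield the exact key again, otherwise the RBD provisioner cannot authenticate against the cluster.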
rbd-storage-data-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-database
  namespace: testsubject
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.200.11:6789,192.168.200.12:6789,192.168.200.13:6789
  adminId: admin
  adminSecretName: ceph-secret-admin-testsubject
  adminSecretNamespace: "testsubject"
  pool: fengjian
  userId: admin
  userSecretName: ceph-secret-admin-testsubject
  imageFormat: "2"
  imageFeatures: "layering"
Instead of creating a PV, use the StorageClass directly: create a PVC against it, and have the Deployment reference the PVC via claimName.
ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
  key: QVFERkcvQmF5ckFkSnhBQVVkM2VCdC82K3dOTnZIM3V0ZHpnTnc9PQo=
rbd-storage-data-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-provisioner
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.200.11:6789,192.168.200.12:6789,192.168.200.13:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: fengjian
  userId: admin
  userSecretName: ceph-secret-admin
  imageFormat: "2"
  imageFeatures: "layering"
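The redis.yaml below references a claim named redis-master-rbd-pvc that is not shown in the original listing. A minimal sketch of such a PVC, assuming the ceph-rbd-provisioner StorageClass above and a hypothetical 10Gi size:

```yaml
# Hypothetical PVC consumed by the redis-master Deployment via claimName.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-master-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd-provisioner
  resources:
    requests:
      storage: 10Gi
```

Once this PVC is bound, the provisioner creates an RBD image in the fengjian pool automatically; no manually defined PV is needed.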
redis.yaml
apiVersion: apps/v1 # for versions before 1.9. use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: 192.168.200.10/redis/redis:master
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: datadir
          mountPath: /data
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: redis-master-rbd-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Kafka manifest
Reference: https://kow3ns.github.io/kubernetes-kafka/manifests/
[root@master1 ceph_rbd]# cat kafka.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-hs
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  maxUnavailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-hs
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: Always
        image: 192.168.200.10/source/kubernetes-kafka:1.0-10.2.1
        resources:
          requests:
            memory: "12Gi"
            cpu: "4"
        ports:
        - containerPort: 9093
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=2147483647 "
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx2G -Xms2G"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          tcpSocket:
            port: 9093
          initialDelaySeconds: 30
          periodSeconds: 10
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "ceph-rbd-provisioner"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
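The broker.id=${HOSTNAME##*-} override in the Kafka start command relies on StatefulSet pods receiving ordinal-suffixed hostnames (kafka-0, kafka-1, ...). A quick sketch of that shell parameter expansion:

```shell
# ${HOSTNAME##*-} strips the longest prefix ending in "-",
# leaving only the pod ordinal, which becomes the broker id.
for HOSTNAME in kafka-0 kafka-1 kafka-2; do
  echo "host=$HOSTNAME broker.id=${HOSTNAME##*-}"
done
# prints:
#   host=kafka-0 broker.id=0
#   host=kafka-1 broker.id=1
#   host=kafka-2 broker.id=2
```

Because the ordinal is stable across pod rescheduling, each broker keeps the same id and therefore reattaches to the same RBD-backed data volume.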
zookeeper.yaml
Reference: https://github.com/kow3ns
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-hs
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "4Gi"
            cpu: "2"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=3G \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 250Gi