Creating dynamic storage with Ceph RBD in the Kubernetes harbor namespace
[root@bs-k8s-ceph ~]# ceph osd pool create harbor 128
Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
// I never found the root cause of this timeout; after a short wait the same command simply succeeded.
[root@bs-k8s-ceph ~]# ceph osd pool create harbor 128
pool 'harbor' created
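Before handing the new pool to Kubernetes, it is worth confirming it and tagging it for RBD use; on Luminous and later, an untagged pool raises a POOL_APP_NOT_ENABLED health warning. A quick sketch:
ceph osd pool ls detail | grep harbor          # confirm the pool and its pg count
ceph osd pool application enable harbor rbd    # tag the pool for RBD (Luminous+)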
[root@bs-k8s-ceph ceph]# ceph auth get-or-create client.harbor mon 'allow r' osd 'allow class-read, allow rwx pool=harbor' -o ceph.client.harbor.keyring
[root@bs-k8s-ceph ceph]# ceph auth get client.harbor
exported keyring for client.harbor
[client.harbor]
key = AQDoCklen6e4NxAAVXmy/PG+R5iH8fNzMhk6Jg==
caps mon = "allow r"
caps osd = "allow class-read, allow rwx pool=harbor"
[root@bs-k8s-node01 ~]# ceph auth get-key client.admin | base64
QVFDNmNVSmV2eU8yRnhBQVBxYzE5Mm5PelNnZk5acmg5aEFQYXc9PQ==
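These base64 strings get pasted into the Kubernetes Secrets later in this article, and a copy error there fails silently until mount time. A cheap sanity check is to decode the string and compare it against the raw key:
echo QVFDNmNVSmV2eU8yRnhBQVBxYzE5Mm5PelNnZk5acmg5aEFQYXc9PQ== | base64 -d   # must print the exact key
ceph auth get-key client.admin                                              # reference value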
[root@bs-k8s-node01 ~]# ceph auth get-key client.harbor | base64
QVFEb0NrbGVuNmU0TnhBQVZYbXkvUEcrUjVpSDhmTnpNaGs2Smc9PQ==
[root@bs-k8s-master01 ~]# kubectl get nodes
The connection to the server 20.0.0.250:8443 was refused - did you specify the right host or port?
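kubectl reaches the apiservers through the HAProxy VIP 20.0.0.250:8443, so a refused connection points at the load balancer rather than the masters. A quick check on the LB node before digging into systemd (a sketch; ss ships with iproute2):
ss -tnlp | grep 8443   # no output means nothing is listening on the apiserver frontend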
[root@bs-hk-hk01 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since 日 2020-02-16 17:16:43 CST; 12min ago
Process: 1168 ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid (code=exited, status=134)
Main PID: 1168 (code=exited, status=134)
2月 15 20:22:54 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202254 (1184) : Server k8s_api_nodes...ue.
2月 15 20:25:15 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202515 (1183) : Server k8s_api_nodes...ue.
2月 15 20:25:15 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202515 (1184) : Server k8s_api_nodes...ue.
2月 15 20:26:03 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202603 (1184) : Server k8s_api_nodes...ue.
2月 15 20:26:03 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202603 (1183) : Server k8s_api_nodes...ue.
2月 15 20:26:13 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202613 (1183) : Server k8s_api_nodes...ue.
2月 15 20:26:13 bs-hk-hk01 haproxy[1168]: [WARNING] 045/202613 (1184) : Server k8s_api_nodes...ue.
2月 16 17:16:43 bs-hk-hk01 systemd[1]: haproxy.service: main process exited, code=exited, st...n/a
2月 16 17:16:44 bs-hk-hk01 systemd[1]: Unit haproxy.service entered failed state.
2月 16 17:16:44 bs-hk-hk01 systemd[1]: haproxy.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@bs-hk-hk01 ~]# systemctl start haproxy
[root@bs-hk-hk01 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since 日 2020-02-16 17:30:03 CST; 1s ago
Process: 4196 ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q (code=exited, status=0/SUCCESS)
Main PID: 4212 (haproxy)
CGroup: /system.slice/haproxy.service
├─4212 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy....
├─4216 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy....
           └─4217 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy....
2月 16 17:30:00 bs-hk-hk01 systemd[1]: Starting HAProxy Load Balancer...
2月 16 17:30:03 bs-hk-hk01 systemd[1]: Started HAProxy Load Balancer.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [WARNING] 046/173004 (4212) : config : 'option for...de.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [WARNING] 046/173004 (4212) : config : 'option for...de.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [WARNING] 046/173004 (4212) : Proxy 'stats': in mu...st.
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [NOTICE] 046/173004 (4212) : New worker #1 (4216) forked
2月 16 17:30:04 bs-hk-hk01 haproxy[4212]: [NOTICE] 046/173004 (4212) : New worker #2 (4217) forked
Hint: Some lines were ellipsized, use -l to show in full.
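Before going back to kubectl, a one-liner can confirm that the apiserver actually answers through the restored VIP; this assumes /healthz is open to anonymous requests, which is the default on kubeadm clusters of this version:
curl -k https://20.0.0.250:8443/healthz   # expect: ok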
[root@bs-hk-hk01 ~]# systemctl enable haproxy
[root@bs-k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
bs-k8s-master01 Ready master 7d6h v1.17.2
bs-k8s-master02 Ready master 7d6h v1.17.2
bs-k8s-master03 Ready master 7d6h v1.17.2
bs-k8s-node01 Ready <none> 7d6h v1.17.2
bs-k8s-node02 Ready <none> 7d6h v1.17.2
bs-k8s-node03 Ready <none> 7d6h v1.17.2
[root@bs-k8s-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default rbd-provisioner-75b85f85bd-8ftdm 1/1 Running 11 7d6h
kube-system calico-node-4jxbp 1/1 Running 4 7d6h
kube-system calico-node-7t9cj 1/1 Running 7 7d6h
kube-system calico-node-cchgl 1/1 Running 14 7d6h
kube-system calico-node-czj76 1/1 Running 6 7d6h
kube-system calico-node-lxb2s 1/1 Running 14 7d6h
kube-system calico-node-nmg9t 1/1 Running 8 7d6h
kube-system coredns-7f9c544f75-bwx9p 1/1 Running 4 7d6h
kube-system coredns-7f9c544f75-q58mr 1/1 Running 3 7d6h
kube-system dashboard-metrics-scraper-6b66849c9-qtwzx 1/1 Running 2 7d5h
kube-system etcd-bs-k8s-master01 1/1 Running 17 7d6h
kube-system etcd-bs-k8s-master02 1/1 Running 7 7d6h
kube-system etcd-bs-k8s-master03 1/1 Running 32 7d6h
kube-system kube-apiserver-bs-k8s-master01 1/1 Running 28 7d6h
kube-system kube-apiserver-bs-k8s-master02 1/1 Running 15 7d6h
kube-system kube-apiserver-bs-k8s-master03 1/1 Running 62 7d6h
kube-system kube-controller-manager-bs-k8s-master01 1/1 Running 32 7d6h
kube-system kube-controller-manager-bs-k8s-master02 1/1 Running 27 7d6h
kube-system kube-controller-manager-bs-k8s-master03 1/1 Running 31 7d6h
kube-system kube-proxy-26ffm 1/1 Running 3 7d6h
kube-system kube-proxy-298tr 1/1 Running 5 7d6h
kube-system kube-proxy-hzsmb 1/1 Running 3 7d6h
kube-system kube-proxy-jb4sq 1/1 Running 4 7d6h
kube-system kube-proxy-pt94r 1/1 Running 4 7d6h
kube-system kube-proxy-wljwv 1/1 Running 4 7d6h
kube-system kube-scheduler-bs-k8s-master01 1/1 Running 32 7d6h
kube-system kube-scheduler-bs-k8s-master02 1/1 Running 21 7d6h
kube-system kube-scheduler-bs-k8s-master03 1/1 Running 31 7d6h
kube-system kubernetes-dashboard-887cbd9c6-j7ptq 1/1 Running 22 7d5h
[root@bs-k8s-master01 harbor]# pwd
/data/k8s/harbor
[root@bs-k8s-master01 rbd]# kubectl apply -f ceph-harbor-namespace.yaml
namespace/harbor created
[root@bs-k8s-master01 rbd]# kubectl get namespaces
NAME STATUS AGE
default Active 7d8h
harbor Active 16s
kube-node-lease Active 7d8h
kube-public Active 7d8h
kube-system Active 7d8h
[root@bs-k8s-master01 rbd]# cat ceph-harbor-namespace.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-02-16
#FileName: ceph-harbor-namespace.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: harbor
[root@bs-k8s-master01 rbd]# kubectl apply -f external-storage-rbd-provisioner.yaml
serviceaccount/rbd-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner configured
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.apps/rbd-provisioner created
[root@bs-k8s-master01 rbd]# kubectl get pods -n harbor -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rbd-provisioner-75b85f85bd-dhnr4 1/1 Running 0 3m48s 10.209.46.84 bs-k8s-node01 <none> <none>
[root@bs-k8s-master01 rbd]# cat external-storage-rbd-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: harbor
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: harbor
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: harbor
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: harbor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: harbor
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: harbor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
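One operational note: if a PVC bound to this provisioner ever sticks in Pending, the provisioner log is the first place to look, since image creation happens inside this pod using the admin credentials. A sketch:
kubectl -n harbor logs deploy/rbd-provisioner -f   # watch provisioning events while a PVC is created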
[root@bs-k8s-master01 harbor]# kubectl apply -f ceph-harbor-secret.yaml
secret/ceph-harbor-admin-secret created
secret/ceph-harbor-harbor-secret created
[root@bs-k8s-master01 harbor]# kubectl get secret -n harbor
NAME TYPE DATA AGE
ceph-harbor-admin-secret kubernetes.io/rbd 1 23s
ceph-harbor-harbor-secret kubernetes.io/rbd 1 23s
default-token-8k9gs kubernetes.io/service-account-token 3 8m49s
rbd-provisioner-token-mhl29 kubernetes.io/service-account-token 3 5m24s
[root@bs-k8s-master01 harbor]# cat ceph-harbor-secret.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-02-16
#FileName: ceph-harbor-secret.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Secret
metadata:
  name: ceph-harbor-admin-secret
  namespace: harbor
data:
  key: QVFDNmNVSmV2eU8yRnhBQVBxYzE5Mm5PelNnZk5acmg5aEFQYXc9PQ==
type: kubernetes.io/rbd
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-harbor-harbor-secret
  namespace: harbor
data:
  key: QVFEb0NrbGVuNmU0TnhBQVZYbXkvUEcrUjVpSDhmTnpNaGs2Smc9PQ==
type: kubernetes.io/rbd
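Hand-encoding keys is easy to get wrong (an accidental newline in the input changes the base64). An equivalent, less error-prone sketch builds the same Secrets directly from the ceph CLI, assuming a host with both ceph admin access and kubectl:
kubectl -n harbor create secret generic ceph-harbor-admin-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)"
kubectl -n harbor create secret generic ceph-harbor-harbor-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.harbor)"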
[root@bs-k8s-master01 harbor]# kubectl apply -f ceph-harbor-storageclass.yaml
storageclass.storage.k8s.io/ceph-harbor created
[root@bs-k8s-master01 harbor]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ceph-harbor ceph.com/rbd Retain Immediate false 11s
ceph-rbd ceph.com/rbd Retain Immediate false 25h
[root@bs-k8s-master01 harbor]# cat ceph-harbor-storageclass.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-02-16
#FileName: ceph-harbor-storageclass.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-harbor
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 20.0.0.206:6789,20.0.0.207:6789,20.0.0.208:6789
  adminId: admin
  adminSecretName: ceph-harbor-admin-secret
  adminSecretNamespace: harbor
  pool: harbor
  fsType: xfs
  userId: harbor
  userSecretName: ceph-harbor-harbor-secret
  imageFormat: "2"
  imageFeatures: "layering"
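Two identities appear in the parameters: adminId/adminSecretName is what the provisioner uses to create and delete images in the pool, while userId/userSecretName is what kubelet on the worker node uses to map the image at mount time, which is why client.harbor only needs rwx on its own pool. The parsed parameters can be double-checked with:
kubectl describe sc ceph-harbor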
[root@bs-k8s-master01 harbor]# kubectl apply -f ceph-harbor-pvc.yaml
persistentvolumeclaim/pvc-ceph-harbor created
[root@bs-k8s-master01 harbor]# kubectl get pv -n harbor
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-494a130d-018c-4be3-9b31-e951cc4367a5 20Gi RWO Retain Bound default/wp-pv-claim ceph-rbd 23h
pvc-4df6a301-c9f3-4694-8271-d1d0184c00aa 1Gi RWO Retain Bound harbor/pvc-ceph-harbor ceph-harbor 6s
pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16 1Gi RWO Retain Bound default/ceph-pvc ceph-rbd 26h
pvc-ac7d3a09-123e-4614-886c-cded8822a078 20Gi RWO Retain Bound default/mysql-pv-claim ceph-rbd 23h
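Two details are worth noting here. PersistentVolumes are cluster-scoped, so the -n harbor flag on kubectl get pv is accepted but ignored; the same list comes back in every namespace. And each Bound PV from the ceph-harbor class is backed by a newly created RBD image in the harbor pool, which can be confirmed from the Ceph side (the image name below is a placeholder to fill in from the listing):
rbd ls -p harbor               # one image per dynamically provisioned PV
rbd info harbor/<image-name>   # shows the size, format and features of a given image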
[root@bs-k8s-master01 harbor]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ceph-pvc Bound pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16 1Gi RWO ceph-rbd 26h
mysql-pv-claim Bound pvc-ac7d3a09-123e-4614-886c-cded8822a078 20Gi RWO ceph-rbd 23h
wp-pv-claim Bound pvc-494a130d-018c-4be3-9b31-e951cc4367a5 20Gi RWO ceph-rbd 23h
[root@bs-k8s-master01 harbor]# kubectl get pvc -n harbor
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-ceph-harbor Bound pvc-4df6a301-c9f3-4694-8271-d1d0184c00aa 1Gi RWO ceph-harbor 24s
[root@bs-k8s-master01 harbor]# cat ceph-harbor-pvc.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-02-16
#FileName: ceph-harbor-pvc.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ceph-harbor
  namespace: harbor
spec:
  storageClassName: ceph-harbor
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
// At this point, dynamic PV provisioning in the harbor namespace is complete.
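As a final smoke test, the claim can be mounted by a throwaway pod; on first use the kubelet on the scheduled node maps the RBD image, formats it xfs per the StorageClass fsType, and mounts it. A minimal sketch (the pod and mount names are made up for illustration):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
  namespace: harbor
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "df -h /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-ceph-harbor
EOF
kubectl -n harbor logs pvc-test should then show an xfs filesystem of roughly 1Gi mounted at /data.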