1. Install the Gluster client on every node (Heketi requires a GlusterFS cluster of at least three nodes)
Remove the master taint so the master node can also run GlusterFS pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
Run kubectl describe node k8s and confirm that the Taints field is empty.
Check that kube-apiserver allows privileged containers:
ps -ef | grep kube | grep allow
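The output should contain --allow-privileged=true; a slightly more targeted check (a hedged sketch, applicable on Kubernetes versions that still expose this flag):
ps -ef | grep kube-apiserver | grep -o 'allow-privileged=[a-z]*'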
Label every node that should run GlusterFS (this label is matched by the DaemonSet's nodeSelector below):
kubectl label node k8s storagenode=glusterfs
kubectl label node k8s-node1 storagenode=glusterfs
kubectl label node k8s-node2 storagenode=glusterfs
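A quick check that the labels landed (plain kubectl, nothing beyond the commands above is assumed):
kubectl get nodes -L storagenode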
2. Ensure a GlusterFS management service runs on every node
cat glusterfs.yaml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        name: glusterfs
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev
          mountPath: "/dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
kubectl create -f glusterfs.yaml && kubectl describe pods <pod_name>
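To confirm that one GlusterFS pod is running on each labeled node, the pod label from the DaemonSet template above can be used as a selector (a quick check, not part of the original steps):
kubectl get pods -l glusterfs-node=pod -o wide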
2. Create the Heketi service
Create a ServiceAccount object:
cat heketi-service.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
kubectl create -f heketi-service.yaml
Deploy the Heketi service:
cat heketi-svc.yaml
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-deployment
    deploy-heketi: heket-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  template:
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-pod
        name: deploy-heketi
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi
        imagePullPolicy: IfNotPresent
        name: deploy-heketi
        env:
        - name: HEKETI_EXECUTOR
          value: kubernetes
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: '14'
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: "/var/lib/heketi"
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: "/hello"
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: "/hello"
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"
---
kind: Service
apiVersion: v1
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: deploy-heketi
  ports:
  - name: deploy-heketi
    port: 8080
    targetPort: 8080
kubectl create -f heketi-svc.yaml && kubectl get svc && kubectl get deployment
Run kubectl describe pod deploy-heketi to see which node the Heketi pod is running on.
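As an optional sanity check (a hedged addition; the jsonpath lookup is an assumption, not part of the original steps), Heketi's /hello endpoint used by the probes above can be hit through the Service's ClusterIP:
curl http://$(kubectl get svc deploy-heketi -o jsonpath='{.spec.clusterIP}'):8080/hello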
3. Install Heketi and the Heketi client on the host
  yum install -y centos-release-gluster
  yum install -y heketi heketi-client
cat topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s"
              ],
              "storage": [
                "192.168.66.86"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node1"
              ],
              "storage": [
                "192.168.66.87"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node2"
              ],
              "storage": [
                "192.168.66.84"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        }
      ]
    }
  ]
}
HEKETI_BOOTSTRAP_POD=$(kubectl get pods | grep deploy-heketi | awk '{print $1}')
kubectl port-forward $HEKETI_BOOTSTRAP_POD 8080:8080 &
The trailing & keeps the port-forward running in the background.
  export HEKETI_CLI_SERVER=http://localhost:8080
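Before loading the topology it is worth confirming the port-forward works; /hello is the same endpoint the probes use (a hedged check, not in the original write-up):
curl $HEKETI_CLI_SERVER/hello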
  heketi-cli topology load --json=topology.json
  heketi-cli topology info
4. Troubleshooting
4.1: Running heketi-cli topology load --json=topology.json fails with:
Creating cluster ... ID: 76576f2209ccd75a0ab1e44fc38fd393
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s ... Unable to create node: New Node doesn't have glusterd running
Creating node k8s-node1 ... Unable to create node: New Node doesn't have glusterd running
Creating node k8s-node2 ... Unable to create node: New Node doesn't have glusterd running
Fix: kubectl create clusterrole fao --verb=get,list,watch,create --resource=pods,pods/status,pods/exec
If the error persists, also bind the service account:
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
4.2: Running heketi-cli topology load --json=topology.json fails with:
Found node k8s on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): Can't open /dev/vdb exclusively. Mounted filesystem?
Can't open /dev/vdb exclusively. Mounted filesystem?
Found node k8s-node1 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): Can't open /dev/vdb exclusively. Mounted filesystem?
Can't open /dev/vdb exclusively. Mounted filesystem?
Found node k8s-node2 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): Can't open /dev/vdb exclusively. Mounted filesystem?
Can't open /dev/vdb exclusively. Mounted filesystem?
Fix: format the disk on each node: mkfs.xfs -f /dev/vdb (-f forces the operation).
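An alternative, if you would rather not create a filesystem only to wipe it again, is to clear the existing signatures directly with the standard util-linux tool (not what the original author did; treat this as a hedged substitute):
wipefs -a /dev/vdb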
4.3: Running heketi-cli topology load --json=topology.json fails with:
Found node k8s on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): WARNING: xfs signature detected on /dev/vdb at offset 0. Wipe it? [y/n]: [n]
Aborted wiping of xfs.
1 existing signature left on the device.
Found node k8s-node1 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): WARNING: xfs signature detected on /dev/vdb at offset 0. Wipe it? [y/n]: [n]
Aborted wiping of xfs.
1 existing signature left on the device.
Found node k8s-node2 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): WARNING: xfs signature detected on /dev/vdb at offset 0. Wipe it? [y/n]: [n]
Aborted wiping of xfs.
1 existing signature left on the device
Fix: enter the glusterfs container on each node and run pvcreate -ff --metadatasize=128M --dataalignment=256K /dev/vdb
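One way to reach that container from the master, assuming the DaemonSet labels defined earlier (the pod name placeholder is hypothetical and will differ in your cluster):
kubectl get pods -l glusterfs-node=pod -o wide
kubectl exec -ti <glusterfs-pod-on-that-node> -- pvcreate -ff --metadatasize=128M --dataalignment=256K /dev/vdb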
5. Define a StorageClass
Use netstat -anp | grep 8080 to confirm the resturl address; resturl must be set to an address from which the API server can reach the Heketi service.
cat storageclass-gluster-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs   # this parameter must be kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8080"
  restauthenabled: "false"
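Create the StorageClass and confirm it is registered (the create step is implied by the workflow rather than spelled out in the original):
kubectl create -f storageclass-gluster-heketi.yaml
kubectl get sc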
6. Define a PVC
cat storageclass.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
kubectl get pvc
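If dynamic provisioning worked, a PV should have been created and bound automatically (a quick check, not in the original text):
kubectl get pv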
7. Use the PVC in a Pod
cat pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/mnt"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
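Create the Pod (the command is implied by the workflow; the file name matches the cat above):
kubectl create -f pod-pvc.yaml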
The following error was reported:
Warning FailedMount 9m34s kubelet, k8s-node2 ****: mount failed: mount failed: exit status 1
Check the log on k8s-node2:
tail -100f /var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-5ea820ba-8538-11ea-8750-5254000e327c/pod-use-pvc-glusterfs.log
line 67: type 'features/utime' is not valid or not found on this machine
Fix: check the node's clock and the glusterfs versions; the glusterfs version on the node did not match the version inside the glusterfs container.
After installing ntp and syncing against ntp1.aliyun.com, the glusterfs container's clock turned out to have been in sync all along, so the real cause was the version gap: the node was on glusterfs 3.12.x while the container ran 7.1. Upgrade glusterfs on the node:
yum install centos-release-gluster -y
yum install glusterfs-client -y
After the upgrade the node is on glusterfs 7.5.
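To confirm the upgrade took effect on the node (a simple check, not part of the original steps):
glusterfs --version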
8. Create a file to verify the setup
On the k8s master, run: kubectl exec -ti pod-use-pvc -- /bin/sh
echo "hello world" > /mnt/b.txt
df -h shows which GlusterFS node the volume is mounted from.
On that node, enter the glusterfs container and look for the file:
docker exec -ti 89f927aa2110 /bin/bash
find / -name b.txt
cat /var/lib/heketi/mounts/vg_22e127efbdefc1bbb315ab0fcf90e779/brick_97de1365f98b19ee3b93ce8ecb588366/brick/b.txt
Alternatively, check from the k8s master by entering the corresponding glusterfs pod:
kubectl exec -ti glusterfs-h4k22 -- /bin/sh
find / -name b.txt
cat /var/lib/heketi/mounts/vg_22e127efbdefc1bbb315ab0fcf90e779/brick_97de1365f98b19ee3b93ce8ecb588366/brick/b.txt