1. Install the Gluster client on every node (Heketi requires the GlusterFS cluster to have at least three nodes)
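A minimal install sketch for the client packages (an assumption — the original does not show this command; package names are the usual ones from the centos-release-gluster repo):
yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-fuse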
Remove the master taint so the master node can also schedule pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
Run kubectl describe node k8s to confirm that Taints is now empty.
Check whether kube-apiserver is running in privileged mode:
ps -ef | grep kube | grep allow
Label every node that should run GlusterFS:
kubectl label node k8s storagenode=glusterfs
kubectl label node k8s-node1 storagenode=glusterfs
kubectl label node k8s-node2 storagenode=glusterfs

2. Run a GlusterFS management service on every labeled node
cat glusterfs.yaml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        name: glusterfs
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev
          mountPath: "/dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run    # no source specified, defaults to an emptyDir volume
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
kubectl create -f glusterfs.yaml && kubectl describe pods <pod_name>
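To confirm that one GlusterFS pod is running on every labeled node (the glusterfs-node=pod label comes from the pod template above):
kubectl get daemonset glusterfs
kubectl get pods -o wide -l glusterfs-node=pod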
2. Create the Heketi service
Create a ServiceAccount object:
cat heketi-service.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
kubectl create -f heketi-service.yaml
Deploy the Heketi service:
cat heketi-svc.yaml
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-deployment
    deploy-heketi: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  template:
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-pod
        name: deploy-heketi
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi
        imagePullPolicy: IfNotPresent
        name: deploy-heketi
        env:
        - name: HEKETI_EXECUTOR
          value: kubernetes
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: '14'
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: "/var/lib/heketi"
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: "/hello"
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: "/hello"
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"
---
kind: Service
apiVersion: v1
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: deploy-heketi
  ports:
  - name: deploy-heketi
    port: 8080
    targetPort: 8080
kubectl create -f heketi-svc.yaml && kubectl get svc && kubectl get deployment
Run kubectl describe pod deploy-heketi to see which node the Heketi pod is running on.
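To confirm Heketi is answering, you can hit the same /hello endpoint the probes use (run this from a machine that can reach the Service's ClusterIP):
HEKETI_SVC_IP=$(kubectl get svc deploy-heketi -o jsonpath='{.spec.clusterIP}')
curl http://$HEKETI_SVC_IP:8080/hello    # Heketi replies with a short hello message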
3. Install Heketi and the Heketi client
  yum install -y centos-release-gluster
  yum install -y heketi heketi-client
cat topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s"
              ],
              "storage": [
                "192.168.66.86"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node1"
              ],
              "storage": [
                "192.168.66.87"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node2"
              ],
              "storage": [
                "192.168.66.84"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        }
      ]
    }
  ]
}

HEKETI_BOOTSTRAP_POD=$(kubectl get pods | grep deploy-heketi | awk '{print $1}')
kubectl port-forward $HEKETI_BOOTSTRAP_POD 8080:8080 &    # run in the background
  export HEKETI_CLI_SERVER=http://localhost:8080
  heketi-cli topology load --json=topology.json
  heketi-cli topology info
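As an optional sanity check (not part of the original steps), create and then remove a small test volume through Heketi:
heketi-cli volume create --size=1
heketi-cli volume list
heketi-cli volume delete <volume_id>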
4. Troubleshooting
4.1: Running heketi-cli topology load --json=topology.json fails with:
Creating cluster ... ID: 76576f2209ccd75a0ab1e44fc38fd393
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s ... Unable to create node: New Node doesn't have glusterd running
Creating node k8s-node1 ... Unable to create node: New Node doesn't have glusterd running
Creating node k8s-node2 ... Unable to create node: New Node doesn't have glusterd running
Fix: grant the Heketi service account access to pods:
kubectl create clusterrole fao --verb=get,list,watch,create --resource=pods,pods/status,pods/exec
If the error persists, also bind the service account:
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account

4.2: Running heketi-cli topology load --json=topology.json fails with:
Found node k8s on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): Can't open /dev/vdb exclusively. Mounted filesystem?
Can't open /dev/vdb exclusively. Mounted filesystem?
Found node k8s-node1 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): Can't open /dev/vdb exclusively. Mounted filesystem?
Can't open /dev/vdb exclusively. Mounted filesystem?
Found node k8s-node2 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): Can't open /dev/vdb exclusively. Mounted filesystem?
Can't open /dev/vdb exclusively. Mounted filesystem?
Fix: format the disk on each node: mkfs.xfs -f /dev/vdb (-f forces the format).

4.3: Running heketi-cli topology load --json=topology.json fails with:
Found node k8s on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): WARNING: xfs signature detected on /dev/vdb at offset 0. Wipe it? [y/n]: [n]
Aborted wiping of xfs.
1 existing signature left on the device.
Found node k8s-node1 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): WARNING: xfs signature detected on /dev/vdb at offset 0. Wipe it? [y/n]: [n]
Aborted wiping of xfs.
1 existing signature left on the device.
Found node k8s-node2 on cluster 88bed810717c204761b99c7ec1b71cd0
Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?): WARNING: xfs signature detected on /dev/vdb at offset 0. Wipe it? [y/n]: [n]
Aborted wiping of xfs.
1 existing signature left on the device
Fix: enter the glusterfs container on each node and run pvcreate -ff --metadatasize=128M --dataalignment=256K /dev/vdb
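An alternative the original does not use: clear the leftover filesystem signature on the host with wipefs and then re-run the topology load:
wipefs -a /dev/vdb    # removes the xfs signature so pvcreate no longer prompts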
5. Define a StorageClass
Run netstat -anp | grep 8080 to find the address for resturl; resturl must be an address at which the API Server can reach the Heketi service.
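For example, one hedged way to build a resturl from the Heketi Service's ClusterIP (whether the API Server can reach this address depends on your network setup):
HEKETI_URL="http://$(kubectl get svc deploy-heketi -o jsonpath='{.spec.clusterIP}'):8080"
echo $HEKETI_URL    # use this value as resturl if it is reachable from the API Server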
cat storageclass-gluster-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs    # must be kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8080"
  restauthenabled: "false"

6. Define a PVC
cat storageclass.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
kubectl get pvc
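If dynamic provisioning succeeded, a PV bound to the claim should also exist:
kubectl get pv    # a dynamically provisioned PV should be Bound to pvc-gluster-heketi (namespace assumed to be default)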
7. Use the PVC in a Pod
cat pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/mnt"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
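Create the Pod and check its events (the create step is implied but not shown above):
kubectl create -f pod-pvc.yaml
kubectl describe pod pod-use-pvc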
The Pod fails to start with the following error:
Warning FailedMount 9m34s kubelet, k8s-node2 ****: mount failed: mount failed: exit status 1
Check the log on k8s-node2:
tail -100f /var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-5ea820ba-8538-11ea-8750-5254000e327c/pod-use-pvc-glusterfs.log
line 67: type 'features/utime' is not valid or not found on this machine
Fix: check the time on the node; the clock and the glusterfs version on the node did not match those of the glusterfs container.
Install ntp and sync against ntp1.aliyun.com; after that the glusterfs container's time stayed in sync. Also upgrade glusterfs on the node (before the upgrade the node was on 3.12.x while the container was on 7.1):
yum install centos-release-gluster -y
yum install glusterfs-client -y
After the upgrade the glusterfs version on the node is 7.5.
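To confirm the version on the node after the upgrade:
glusterfs --version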
8. Create a file to verify the setup
On the k8s master, run: kubectl exec -ti pod-use-pvc -- /bin/sh
echo "hello world" > /mnt/b.txt
Run df -h to see which GlusterFS node the volume is mounted from.
On that node, enter the glusterfs container and look for the file:
docker exec -ti 89f927aa2110 /bin/bash
find / -name b.txt
cat /var/lib/heketi/mounts/vg_22e127efbdefc1bbb315ab0fcf90e779/brick_97de1365f98b19ee3b93ce8ecb588366/brick/b.txt
Or view it from the k8s master by entering the corresponding glusterfs pod:
kubectl exec -ti glusterfs-h4k22 -- /bin/sh
find / -name b.txt
cat /var/lib/heketi/mounts/vg_22e127efbdefc1bbb315ab0fcf90e779/brick_97de1365f98b19ee3b93ce8ecb588366/brick/b.txt
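Instead of find, you can also ask GlusterFS itself where the bricks are (glusterfs-h4k22 is the example pod name used above):
kubectl exec -ti glusterfs-h4k22 -- gluster volume list
kubectl exec -ti glusterfs-h4k22 -- gluster volume info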