Appendix 014. Kubernetes Prometheus + Grafana + EFK + Kibana + GlusterFS Integrated Solution
1 GlusterFS storage cluster deployment
1.1 Architecture overview
1.2 Planning
| Host | IP | Disk | Notes |
| --- | --- | --- | --- |
| k8smaster01 | 172.24.8.71 | —— | Kubernetes master node, Heketi host |
| k8smaster02 | 172.24.8.72 | —— | Kubernetes master node, Heketi host |
| k8smaster03 | 172.24.8.73 | —— | Kubernetes master node, Heketi host |
| k8snode01 | 172.24.8.74 | sdb | Kubernetes worker node, GlusterFS node 01 |
| k8snode02 | 172.24.8.75 | sdb | Kubernetes worker node, GlusterFS node 02 |
| k8snode03 | 172.24.8.76 | sdb | Kubernetes worker node, GlusterFS node 03 |
1.3 Install GlusterFS
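The detailed commands are omitted here; the following is a minimal sketch, assuming CentOS 7 worker nodes (k8snode01 to k8snode03) and the CentOS Storage SIG GlusterFS repository:

# Install the GlusterFS server from the Storage SIG repository (run on every GlusterFS node)
yum install -y centos-release-gluster
yum install -y glusterfs-server
# Start glusterd and enable it at boot
systemctl start glusterd
systemctl enable glusterd
systemctl status glusterd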
1.4 Add the trusted storage pool
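As a sketch, run on any one GlusterFS node (here assumed to be k8snode01) to probe the other nodes into the trusted storage pool:

# Probe the remaining nodes from k8snode01
gluster peer probe k8snode02
gluster peer probe k8snode03
# Confirm every peer shows "Peer in Cluster (Connected)"
gluster peer status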
1.5 Install Heketi
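A minimal sketch, assuming the Heketi packages also come from the CentOS Storage SIG repository and are installed on the Heketi hosts listed in 1.2:

# Install the Heketi server and client
yum install -y heketi heketi-client
# Confirm the packages landed
rpm -qa | grep heketi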
1.6 Configure Heketi
{
  "_port_comment": "Heketi Server Port Number",
  "port": "",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin123"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "xianghy"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      " It will not send commands to any node.",
      "ssh: This setting will notify Heketi to ssh to the nodes.",
      " It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      " Kubernetes exec api."
    ],
    "executor": "ssh",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "",
      "fstab": "/etc/fstab"
    },
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "_loglevel_comment": [
      "Set log level. Choices are:",
      " none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "warning"
  }
}
1.7 Configure passwordless SSH
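A sketch of the key setup, assuming the key path /etc/heketi/heketi_key configured in heketi.json above and root SSH access to the GlusterFS nodes:

# Generate a dedicated key pair for Heketi (no passphrase)
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""
chown heketi:heketi /etc/heketi/heketi_key*
# Distribute the public key to every GlusterFS node
ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode01
ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode02
ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode03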
1.8 Start Heketi
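A sketch, assuming the systemd unit shipped with the heketi package and the default listen port 8080:

systemctl enable heketi
systemctl restart heketi
# The hello endpoint should answer "Hello from Heketi"
curl http://172.24.8.71:8080/hello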
1.9 Configure the Heketi topology
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode01"
              ],
              "storage": [
                "172.24.8.74"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode02"
              ],
              "storage": [
                "172.24.8.75"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode03"
              ],
              "storage": [
                "172.24.8.76"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
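With JWT auth enabled in heketi.json, the topology above is loaded through heketi-cli; a sketch, assuming the file is saved as topology.json and Heketi listens on 172.24.8.71:8080:

# Load the topology; the admin secret matches the key in heketi.json
heketi-cli --server http://172.24.8.71:8080 \
  --user admin --secret admin123 \
  topology load --json=topology.json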
1.10 Cluster management and testing
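The details are omitted here; as a hedged example, the cluster can be inspected and exercised with heketi-cli, and the cluster ID reported by `cluster list` is the value used in the StorageClass below:

export HEKETI_CLI_SERVER=http://172.24.8.71:8080
# List the cluster, nodes and devices created from the topology
heketi-cli --user admin --secret admin123 cluster list
heketi-cli --user admin --secret admin123 topology info
# Create a small test volume to verify provisioning works end to end
heketi-cli --user admin --secret admin123 volume create --size=1
heketi-cli --user admin --secret admin123 volume list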
1.11 Create the StorageClass
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: heketi
data:
  key: YWRtaW4xMjM=
type: kubernetes.io/glusterfs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ghstorageclass
parameters:
  resturl: "http://172.24.8.71:8080"
  clusterid: "ad0f81f75f01d01ebd6a21834a2caa30"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "heketi"
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
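A sketch of applying the two manifests above, assuming they are saved locally as heketi-secret.yaml and ghstorageclass.yaml (assumed file names). The Secret key is simply the base64 of the Heketi admin key, and the clusterid must be replaced with the ID reported by `heketi-cli cluster list` in 1.10:

# base64-encode the Heketi admin key used in the Secret
echo -n "admin123" | base64        # prints YWRtaW4xMjM=
# Create the namespace if it does not exist yet, then apply the manifests
kubectl create namespace heketi
kubectl apply -f heketi-secret.yaml
kubectl apply -f ghstorageclass.yaml
kubectl get storageclass ghstorageclass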
2 Cluster monitoring: Metrics
2.1 Enable the aggregation layer
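The steps are omitted above; as a sketch for a kubeadm-built cluster, the relevant kube-apiserver flags can be checked in the static Pod manifest on each master. kubeadm normally sets the requestheader and proxy-client flags already, so --enable-aggregator-routing is usually the only one that may need to be added:

# Check the aggregation-layer flags on each master
grep -E "requestheader|proxy-client|enable-aggregator-routing" \
  /etc/kubernetes/manifests/kube-apiserver.yaml
# If missing, add the following to the kube-apiserver command section:
#   --enable-aggregator-routing=true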
2.2 Get the deployment manifests
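A sketch of obtaining the files, assuming the v0.3.6 release of kubernetes-sigs/metrics-server and the deploy/1.8+ directory used by the 0.3.x releases; the deployment manifest is then edited as in the fragment below (switch the image to a reachable mirror and add the kubelet flags):

git clone -b v0.3.6 https://github.com/kubernetes-sigs/metrics-server.git
vi metrics-server/deploy/1.8+/metrics-server-deployment.yaml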
……
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6    # switch to a registry mirror reachable from mainland China
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP    # add the command entries above
……
2.3 Deploy
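A sketch, assuming the edited manifests from 2.2 are still under metrics-server/deploy/1.8+/ (the actual path depends on where the files were cloned):

kubectl apply -f metrics-server/deploy/1.8+/
kubectl -n kube-system get pods -l k8s-app=metrics-server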
2.4 Verify
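Once the metrics API is registered, resource metrics should be queryable; a quick check:

kubectl get apiservices | grep metrics
kubectl top nodes
kubectl top pods -n kube-system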
3 Prometheus deployment
3.1 Get the deployment manifests
3.2 Create the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
3.3 Create the RBAC objects
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring    # only the namespace needs to be changed
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring    # only the namespace needs to be changed
3.4 Create the Prometheus ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring    # change the namespace
……
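The actual prometheus.yml carried in the data section is omitted above. Purely as an illustration, and not the original scrape configuration, the data section of such a ConfigMap could hold a minimal config like the following:

data:
  prometheus.yml: |-
    global:
      scrape_interval: 30s
      evaluation_interval: 30s
    scrape_configs:
      # Prometheus scraping itself, the simplest possible job
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']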
3.5 Create the persistent PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
  namespace: monitoring
  annotations:
    volume.beta.kubernetes.io/storage-class: ghstorageclass
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
3.6 Deploy Prometheus
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus-server
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      - name: prometheus-server
        image: prom/prometheus:v2.14.0
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus/"
        - "--storage.tsdb.retention=72h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - name: prometheus-config-volume
          mountPath: /etc/prometheus/
        - name: prometheus-storage-volume
          mountPath: /prometheus/
      serviceAccountName: prometheus
      imagePullSecrets:
      - name: regsecret
      volumes:
      - name: prometheus-config-volume
        configMap:
          defaultMode: 420
          name: prometheus-server-conf
      - name: prometheus-storage-volume
        persistentVolumeClaim:
          claimName: prometheus-pvc
3.7 Create the Prometheus Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus-service
  name: prometheus-service
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: prometheus-server
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30001
3.8 Verify Prometheus
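A quick check that the Pod is running and the UI is reachable through the NodePort defined in 3.7:

kubectl get pods -n monitoring -o wide
kubectl get svc -n monitoring
# Then browse http://<any_node_IP>:30001 and check Status -> Targets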
4 Deploy Grafana
4.1 Get the deployment manifests
4.2 Create the persistent PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data-pvc
  namespace: monitoring
  annotations:
    volume.beta.kubernetes.io/storage-class: ghstorageclass
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
4.3 Deploy Grafana
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:6.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: ""
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-data-pvc
      nodeSelector:
        node-role.kubernetes.io/master: "true"
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/tcp-probe: 'true'
    prometheus.io/tcp-probe-port: '80'
  name: monitoring-grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30002
  selector:
    k8s-app: grafana
4.4 Verify Grafana
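Similarly, confirm the Grafana Pod is ready and the UI answers on the NodePort from 4.3:

kubectl get pods -n monitoring -l k8s-app=grafana
kubectl get svc monitoring-grafana -n monitoring
# Then browse http://<any_node_IP>:30002 (anonymous Admin access is enabled above)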
4.5 Configure Grafana
- Add the data source: omitted
- Create users: omitted
4.6 View the monitoring dashboards