I. Introduction

1. Prometheus

  • An open-source combination of monitoring, alerting, and a time-series database, originally developed at SoundCloud
  • Its basic approach is to periodically scrape the state of monitored components over HTTP. The advantage is that any component can be monitored simply by exposing an HTTP endpoint, with no SDK or other integration work, which makes it a natural fit for virtualized environments such as VMs or Docker (see the snippet after this list)
  • The HTTP endpoint that exposes a monitored component's metrics is called an exporter. Exporters already exist for most components in common use at internet companies, e.g. Varnish, HAProxy, Nginx, MySQL, and Linux system metrics (disk, memory, CPU, network, etc.); see https://github.com/prometheus for the supported sources
  • Features:
    • A multi-dimensional data model (time series identified by metric name and key/value label pairs)
    • Very efficient storage: an average sample takes ~3.5 bytes, so 3.2 million time series sampled every 30 s and retained for 60 days consume roughly 228 GB of disk
    • A flexible query language (PromQL)
    • No dependency on distributed storage; single server nodes are autonomous
    • Time series collection happens via a pull model over HTTP
    • Pushing time series is supported via an intermediary gateway (the Pushgateway)
    • Targets are discovered via service discovery or static configuration
    • Multiple modes of graphing and dashboard support
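
For example, any service that exposes a /metrics endpoint over HTTP can be added to Prometheus with nothing more than a scrape entry in prometheus.yml (a minimal sketch; my-app:8080 is a hypothetical target, not part of this deployment):

  scrape_configs:
    - job_name: 'my-app'            # hypothetical service exposing /metrics over HTTP
      static_configs:
        - targets: ['my-app:8080']  # Prometheus polls http://my-app:8080/metrics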

2. Grafana

  • A cross-platform, open-source metrics analysis and visualization tool: it queries the collected data, visualizes it, and sends timely notifications
  • Features:
    • Display: fast and flexible client-side graphs; panel plugins provide many ways to visualize metrics and logs, and the official library offers a rich set of dashboard plugins such as heatmaps, line charts, and other chart types
    • Data sources: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch, KairosDB, and more
    • Alerting: visually define alert rules for your most important metrics; Grafana continuously evaluates them and sends notifications via Slack, PagerDuty, etc. when data crosses a threshold
    • Mixed display: combine different data sources in the same chart; the data source can be specified per query, and custom data sources are supported
    • Annotations: annotate graphs with rich events from different data sources; hovering over an event shows its full metadata and tags
    • Filters: ad-hoc filters let you dynamically create new key/value filters that are automatically applied to all queries against that data source

3. Results preview

II. Deployment

  $ kubectl create ns ns-monitor
  $ kubectl create -f ...
  $ kubectl get all -n ns-monitor
  NAME                              READY   STATUS    RESTARTS   AGE
  pod/node-exporter-rcbss           1/1     Running   0          4h41m
  pod/grafana-5567c66c9d-49b5w      1/1     Running   0          4h25m
  pod/prometheus-5ccc8db98f-lkwf5   1/1     Running   0          3h12m

  NAME                            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
  service/node-exporter-service   NodePort   10.43.75.152    <none>        9100:31672/TCP   4h41m
  service/grafana-service         NodePort   10.43.26.238    <none>        3000:32534/TCP   4h25m
  service/prometheus-service      NodePort   10.43.174.110   <none>        9090:31396/TCP   3h12m

No nodePort is set in the grafana and prometheus Services, so their node ports are assigned randomly.
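
To pin a stable port instead, declare nodePort explicitly in the Service spec (a minimal sketch; the value must lie in the cluster's node-port range, 30000–32767 by default):

  spec:
    type: NodePort
    ports:
      - port: 3000
        targetPort: 3000
        nodePort: 32534  # fixed node port instead of a randomly assigned one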

1. node-exporter

  • Collects the physical metrics of each node in the k8s cluster, such as memory and CPU; it can also be installed directly on each physical node
  kind: DaemonSet
  apiVersion: apps/v1
  metadata:
    labels:
      app: node-exporter
    name: node-exporter
    namespace: ns-monitor
  spec:
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: node-exporter
    template:
      metadata:
        labels:
          app: node-exporter
      spec:
        containers:
          - name: node-exporter
            image: prom/node-exporter:v0.16.0
            ports:
              - containerPort: 9100
                protocol: TCP
                name: http
        hostNetwork: true # share the node's network namespace to read its physical metrics
        hostPID: true     # share the node's PID namespace to read its physical metrics
        # tolerations:    # uncomment to also run on master nodes
        #   - effect: NoSchedule
        #     operator: Exists
  ---
  kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: node-exporter
    name: node-exporter-service
    namespace: ns-monitor
  spec:
    ports:
      - name: http
        port: 9100
        nodePort: 31672
        protocol: TCP
    type: NodePort
    selector:
      app: node-exporter

http://192.168.11.210:31672/metrics
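
A quick sanity check from any machine that can reach the node (the address comes from the Service above; metric names follow node-exporter 0.16):

  $ curl -s http://192.168.11.210:31672/metrics | grep ^node_load1

You should get back a node_load1 line carrying the node's current 1-minute load average.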

2. Prometheus

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: prometheus
  rules:
    - apiGroups: [""] # "" indicates the core API group
      resources:
        - nodes
        - nodes/proxy
        - services
        - endpoints
        - pods
      verbs:
        - get
        - watch
        - list
    - apiGroups:
        - extensions
      resources:
        - ingresses
      verbs:
        - get
        - watch
        - list
    - nonResourceURLs: ["/metrics"]
      verbs:
        - get
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: prometheus
    namespace: ns-monitor
    labels:
      app: prometheus
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: prometheus
  subjects:
    - kind: ServiceAccount
      name: prometheus
      namespace: ns-monitor
  roleRef:
    kind: ClusterRole
    name: prometheus
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: prometheus-conf
    namespace: ns-monitor
    labels:
      app: prometheus
  data:
    prometheus.yml: |-
      # my global config
      global:
        scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
        evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
        # scrape_timeout is set to the global default (10s).

      # Alertmanager configuration
      alerting:
        alertmanagers:
          - static_configs:
              - targets:
                # - alertmanager:9093

      # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
      rule_files:
        - "/etc/prometheus/rules/*.rule" # load the rules mounted from the prometheus-rules ConfigMap below
        # - "first_rules.yml"
        # - "second_rules.yml"

      # A scrape configuration containing exactly one endpoint to scrape:
      # Here it's Prometheus itself.
      scrape_configs:
        # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
        - job_name: 'prometheus'
          # metrics_path defaults to '/metrics'
          # scheme defaults to 'http'.
          static_configs:
            - targets: ['localhost:9090']

        - job_name: 'grafana'
          static_configs:
            - targets:
                - 'grafana-service.ns-monitor:3000'

        - job_name: 'kubernetes-apiservers'
          kubernetes_sd_configs:
            - role: endpoints
          # Default to scraping over https. If required, just disable this or change to
          # `http`.
          scheme: https
          # This TLS & bearer token file config is used to connect to the actual scrape
          # endpoints for cluster components. This is separate to discovery auth
          # configuration because discovery & scraping are two separate concerns in
          # Prometheus. The discovery auth config is automatic if Prometheus runs inside
          # the cluster. Otherwise, more config options have to be provided within the
          # <kubernetes_sd_config>.
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            # If your node certificates are self-signed or use a different CA to the
            # master CA, then disable certificate verification below. Note that
            # certificate verification is an integral part of a secure infrastructure
            # so this should only be disabled in a controlled environment. You can
            # disable certificate verification by uncommenting the line below.
            #
            # insecure_skip_verify: true
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          # Keep only the default/kubernetes service endpoints for the https port. This
          # will add targets for each API server which Kubernetes adds an endpoint to
          # the default/kubernetes service.
          relabel_configs:
            - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
              action: keep
              regex: default;kubernetes;https

        # Scrape config for nodes (kubelet).
        #
        # Rather than connecting directly to the node, the scrape is proxied though the
        # Kubernetes apiserver. This means it will work if Prometheus is running out of
        # cluster, or can't connect to nodes for some other reason (e.g. because of
        # firewalling).
        - job_name: 'kubernetes-nodes'
          # Default to scraping over https. If required, just disable this or change to
          # `http`.
          scheme: https
          # This TLS & bearer token file config is used to connect to the actual scrape
          # endpoints for cluster components. This is separate to discovery auth
          # configuration because discovery & scraping are two separate concerns in
          # Prometheus. The discovery auth config is automatic if Prometheus runs inside
          # the cluster. Otherwise, more config options have to be provided within the
          # <kubernetes_sd_config>.
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          kubernetes_sd_configs:
            - role: node
          relabel_configs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - target_label: __address__
              replacement: kubernetes.default.svc:443
            - source_labels: [__meta_kubernetes_node_name]
              regex: (.+)
              target_label: __metrics_path__
              replacement: /api/v1/nodes/${1}/proxy/metrics

        # Scrape config for Kubelet cAdvisor.
        #
        # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
        # (those whose names begin with 'container_') have been removed from the
        # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
        # retrieve those metrics.
        #
        # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
        # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
        # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
        # the --cadvisor-port=0 Kubelet flag).
        #
        # This job is not necessary and should be removed in Kubernetes 1.6 and
        # earlier versions, or it will cause the metrics to be scraped twice.
        - job_name: 'kubernetes-cadvisor'
          # Default to scraping over https. If required, just disable this or change to
          # `http`.
          scheme: https
          # This TLS & bearer token file config is used to connect to the actual scrape
          # endpoints for cluster components. This is separate to discovery auth
          # configuration because discovery & scraping are two separate concerns in
          # Prometheus. The discovery auth config is automatic if Prometheus runs inside
          # the cluster. Otherwise, more config options have to be provided within the
          # <kubernetes_sd_config>.
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          kubernetes_sd_configs:
            - role: node
          relabel_configs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - target_label: __address__
              replacement: kubernetes.default.svc:443
            - source_labels: [__meta_kubernetes_node_name]
              regex: (.+)
              target_label: __metrics_path__
              replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

        # Scrape config for service endpoints.
        #
        # The relabeling allows the actual service scrape endpoint to be configured
        # via the following annotations:
        #
        # * `prometheus.io/scrape`: Only scrape services that have a value of `true`
        # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
        #   to set this to `https` & most likely set the `tls_config` of the scrape config.
        # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
        # * `prometheus.io/port`: If the metrics are exposed on a different port to the
        #   service then set this appropriately.
        - job_name: 'kubernetes-service-endpoints'
          kubernetes_sd_configs:
            - role: endpoints
          relabel_configs:
            - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
              action: replace
              target_label: __scheme__
              regex: (https?)
            - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
            - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
              action: replace
              target_label: __address__
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_service_name]
              action: replace
              target_label: kubernetes_name

        # Example scrape config for probing services via the Blackbox Exporter.
        #
        # The relabeling allows the actual service scrape endpoint to be configured
        # via the following annotations:
        #
        # * `prometheus.io/probe`: Only probe services that have a value of `true`
        - job_name: 'kubernetes-services'
          metrics_path: /probe
          params:
            module: [http_2xx]
          kubernetes_sd_configs:
            - role: service
          relabel_configs:
            - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
              action: keep
              regex: true
            - source_labels: [__address__]
              target_label: __param_target
            - target_label: __address__
              replacement: blackbox-exporter.example.com:9115
            - source_labels: [__param_target]
              target_label: instance
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels: [__meta_kubernetes_namespace]
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_service_name]
              target_label: kubernetes_name

        # Example scrape config for probing ingresses via the Blackbox Exporter.
        #
        # The relabeling allows the actual ingress scrape endpoint to be configured
        # via the following annotations:
        #
        # * `prometheus.io/probe`: Only probe ingresses that have a value of `true`
        - job_name: 'kubernetes-ingresses'
          metrics_path: /probe
          params:
            module: [http_2xx]
          kubernetes_sd_configs:
            - role: ingress
          relabel_configs:
            - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
              action: keep
              regex: true
            - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
              regex: (.+);(.+);(.+)
              replacement: ${1}://${2}${3}
              target_label: __param_target
            - target_label: __address__
              replacement: blackbox-exporter.example.com:9115
            - source_labels: [__param_target]
              target_label: instance
            - action: labelmap
              regex: __meta_kubernetes_ingress_label_(.+)
            - source_labels: [__meta_kubernetes_namespace]
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_ingress_name]
              target_label: kubernetes_name

        # Example scrape config for pods
        #
        # The relabeling allows the actual pod scrape endpoint to be configured via the
        # following annotations:
        #
        # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
        # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
        # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
        #   pod's declared ports (default is a port-free target if none are declared).
        - job_name: 'kubernetes-pods'
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
            - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
              action: replace
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
              target_label: __address__
            - action: labelmap
              regex: __meta_kubernetes_pod_label_(.+)
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_pod_name]
              action: replace
              target_label: kubernetes_pod_name
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: prometheus-rules
    namespace: ns-monitor
    labels:
      app: prometheus
  data:
    cpu-usage.rule: |
      groups:
        - name: NodeCPUUsage
          rules:
            - alert: NodeCPUUsage
              # node-exporter v0.16.0 exposes node_cpu_seconds_total (named node_cpu in older releases)
              expr: (100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)) > 75
              for: 2m
              labels:
                severity: "page"
              annotations:
                summary: "{{$labels.instance}}: High CPU usage detected"
                description: "{{$labels.instance}}: CPU usage is above 75% (current value is: {{ $value }})"
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: "prometheus-data-pv"
    labels:
      name: prometheus-data-pv
      release: stable
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    nfs:
      path: /nfs/prometheus/data
      server: 192.168.11.210
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: prometheus-data-pvc
    namespace: ns-monitor
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
    selector:
      matchLabels:
        name: prometheus-data-pv
        release: stable
  ---
  kind: Deployment
  apiVersion: apps/v1
  metadata:
    labels:
      app: prometheus
    name: prometheus
    namespace: ns-monitor
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: prometheus
    template:
      metadata:
        labels:
          app: prometheus
      spec:
        serviceAccountName: prometheus
        securityContext:
          runAsUser: 0
        containers:
          - name: prometheus
            image: prom/prometheus:latest
            imagePullPolicy: IfNotPresent
            volumeMounts:
              - mountPath: /prometheus
                name: prometheus-data-volume
              - mountPath: /etc/prometheus/prometheus.yml
                name: prometheus-conf-volume
                subPath: prometheus.yml
              - mountPath: /etc/prometheus/rules
                name: prometheus-rules-volume
            ports:
              - containerPort: 9090
                protocol: TCP
        volumes:
          - name: prometheus-data-volume
            persistentVolumeClaim:
              claimName: prometheus-data-pvc
          - name: prometheus-conf-volume
            configMap:
              name: prometheus-conf
          - name: prometheus-rules-volume
            configMap:
              name: prometheus-rules
        tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
  ---
  kind: Service
  apiVersion: v1
  metadata:
    annotations:
      prometheus.io/scrape: 'true'
    labels:
      app: prometheus
    name: prometheus-service
    namespace: ns-monitor
  spec:
    ports:
      - port: 9090
        targetPort: 9090
    selector:
      app: prometheus
    type: NodePort
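
Before applying the manifest, the embedded config and rule file can be validated offline with promtool, which ships inside the prom/prometheus image (a sketch, assuming Docker is available and prometheus.yml / cpu-usage.rule were saved to the current directory):

  $ docker run --rm -v $(pwd):/cfg --entrypoint promtool prom/prometheus:latest check config /cfg/prometheus.yml
  $ docker run --rm -v $(pwd):/cfg --entrypoint promtool prom/prometheus:latest check rules /cfg/cpu-usage.rule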

http://192.168.11.210:31396/targets
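
Once the targets page shows every endpoint as UP, a first query in the Prometheus graph UI confirms data is flowing in; this is the same expression the NodeCPUUsage alert rule above is built from (node-exporter 0.16 metric names assumed):

  100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)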

3. Grafana

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: "grafana-data-pv"
    labels:
      name: grafana-data-pv
      release: stable
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    nfs:
      path: /nfs/grafana/data
      server: 192.168.11.210
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: grafana-data-pvc
    namespace: ns-monitor
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
    selector:
      matchLabels:
        name: grafana-data-pv
        release: stable
  ---
  kind: Deployment
  apiVersion: apps/v1
  metadata:
    labels:
      app: grafana
    name: grafana
    namespace: ns-monitor
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: grafana
    template:
      metadata:
        labels:
          app: grafana
      spec:
        securityContext:
          runAsUser: 0
        containers:
          - name: grafana
            image: grafana/grafana:latest
            imagePullPolicy: IfNotPresent
            env:
              - name: GF_AUTH_BASIC_ENABLED
                value: "true"
              - name: GF_AUTH_ANONYMOUS_ENABLED
                value: "false"
            readinessProbe:
              httpGet:
                path: /login
                port: 3000
            volumeMounts:
              - mountPath: /var/lib/grafana
                name: grafana-data-volume
            ports:
              - containerPort: 3000
                protocol: TCP
        volumes:
          - name: grafana-data-volume
            persistentVolumeClaim:
              claimName: grafana-data-pvc
  ---
  kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: grafana
    name: grafana-service
    namespace: ns-monitor
  spec:
    ports:
      - port: 3000
        targetPort: 3000
    selector:
      app: grafana
    type: NodePort

http://192.168.11.210:32534

Configure the data source
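
In the Grafana UI (default credentials admin/admin on first login), add a data source of type Prometheus. Since Grafana and Prometheus run in the same namespace, the in-cluster Service DNS name is the simplest URL (a sketch; the 'Server' access mode is assumed so Grafana's backend performs the queries):

  Name:   Prometheus
  Type:   Prometheus
  URL:    http://prometheus-service.ns-monitor:9090
  Access: Server (default)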

Import dashboard from file (optional)

https://files.cnblogs.com/files/lb477/Kubernetes-Pod-Resources.json
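
To import it, use + → Import → Upload .json File in Grafana, select the downloaded Kubernetes-Pod-Resources.json, and bind it to the Prometheus data source configured above (menu wording varies slightly between Grafana versions).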


Reference: https://www.jianshu.com/p/ac8853927528
