Origin of the problem

  1. 海口-老男人 17:42:43
  2. I want to run a Node.js service: publish it as a Deployment first, then create a Service so it can be reached from outside the cluster.
  3. 旧报纸 17:43:35
  4. So your requirement is: publish an app as a Deployment, then a Service, then put a proxy in front, right?

Solution:

  1. Deploy Traefik
  2. Publish the Deployment (Pods)
  3. Create the Service
  4. Have Traefik proxy the Service
  5. Access it from outside the cluster

Environment

docker 19.03.5

kubernetes 1.17.2

Traefik deployment

Why choose Traefik over Nginx:

  1. https://www.php.cn/nginx/422461.html
  2. Of note: Traefik 2.1 added canary releases and traffic mirroring -- it was built for containers and microservices
  3. The official Traefik docs are quite good; here is a quick-start post as well: https://www.qikqiak.com/post/traefik-2.1-101/
  4. That said, do not deploy Traefik using the manifests from the links above -- use the ones below

Note: the manifests below deploy Traefik into the kube-system Namespace (the demo application later in this post lives in the assembly Namespace). If you want to deploy elsewhere, adjust the Namespace fields in the files below.

traefik-crd.yaml

Since v2.1, Traefik uses CRDs (Custom Resource Definitions) for routing configuration and more, so the CRD resources have to be created up front.

```yaml
# cat traefik-crd.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-crd.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
## IngressRoute
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
---
## IngressRouteTCP
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
---
## Middleware
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
---
## TLSOption
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
---
## TraefikService
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
```
Create the RBAC permissions

Traefik needs certain permissions, so create a ServiceAccount for it ahead of time and grant it the required access.

```yaml
# cat traefik-rbac.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-rbac.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
## ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: traefik-ingress-controller
---
## ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","secrets"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["middlewares"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutetcps"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["tlsoptions"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["traefikservices"]
    verbs: ["get","list","watch"]
---
## ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
```

Create the Traefik configuration file

Traefik has a lot of configuration options, and defining them all on the CLI is unwieldy, so the usual approach is to put them in a configuration file, store it in a ConfigMap, and mount that into the Traefik Pod.

traefik-config.yaml

```yaml
# cat traefik-config.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-config.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
  namespace: kube-system
data:
  traefik.yaml: |-
    ping: ""                      ## Enable the Ping endpoint
    serversTransport:
      insecureSkipVerify: true    ## Skip TLS certificate verification for proxied backends
    api:
      insecure: true              ## Allow access to the API over HTTP
      dashboard: true             ## Enable the Dashboard
      debug: false                ## Debug mode
    metrics:
      prometheus: ""              ## Expose Prometheus metrics with the default settings
    entryPoints:
      web:
        address: ":80"            ## Port 80, entry point named web
      websecure:
        address: ":443"           ## Port 443, entry point named websecure
      redis:
        address: ":6379"          ## Port 6379, entry point named redis
    providers:
      kubernetesCRD: ""           ## Enable the Kubernetes CRD provider for routing rules
      kubernetesIngress: ""       ## Enable the Kubernetes Ingress provider for routing rules
    log:
      filePath: ""                ## Log file path; empty means log to the console
      level: error                ## Log level
      format: json                ## Log format
    accessLog:
      filePath: ""                ## Access-log file path; empty means log to the console
      format: json                ## Access-log format
      bufferingSize: 0            ## Number of access-log lines to buffer
      filters:
        #statusCodes: ["200"]     ## Keep only access logs with status codes in this range
        retryAttempts: true       ## Keep access logs when a proxy retry occurs
        minDuration: 20           ## Keep access logs for requests longer than this duration
      fields:                     ## Which access-log fields to keep (keep / drop)
        defaultMode: keep         ## Keep fields by default
        names:                    ## Per-field overrides
          ClientUsername: drop
        headers:                  ## Which header fields to keep
          defaultMode: keep       ## Keep header fields by default
          names:                  ## Per-header overrides
            User-Agent: redact
            Authorization: drop
            Content-Type: keep
```

Key configuration

```yaml
entryPoints:
  web:
    address: ":80"     ## Port 80, entry point named web
  websecure:
    address: ":443"    ## Port 443, entry point named websecure
  redis:
    address: ":6379"   ## Port 6379, entry point named redis
```

Deploy Traefik

Traefik is deployed here as a DaemonSet, so the target nodes need a Label set ahead of time; when the workload is deployed, its Pods will be scheduled automatically onto the labeled nodes.

traefik-deploy.yaml

```shell
kubectl label nodes 20.0.0.202 IngressProxy=true
```

```yaml
# cat traefik-deploy.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-deploy.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - name: web
      port: 80
    - name: websecure
      port: 443
    - name: admin
      port: 8080
    - name: redis
      port: 6379
  selector:
    app: traefik
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    app: traefik
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      name: traefik
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 1
      containers:
        - image: traefik:v2.1.2
          name: traefik-ingress-lb
          ports:
            - name: web
              containerPort: 80
              hostPort: 80          ## Bind the container port to port 80 on the host
            - name: websecure
              containerPort: 443
              hostPort: 443         ## Bind the container port to port 443 on the host
            - name: redis
              containerPort: 6379
              hostPort: 6379
            - name: admin
              containerPort: 8080   ## Traefik Dashboard port
          resources:
            limits:
              cpu: 2000m
              memory: 1024Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --configfile=/config/traefik.yaml
          volumeMounts:
            - mountPath: "/config"
              name: "config"
      volumes:
        - name: config
          configMap:
            name: traefik-config
      tolerations:                  ## Tolerate all taints, in case the nodes have taints set
        - operator: "Exists"
      nodeSelector:                 ## Node selector: start only on nodes with this label
        IngressProxy: "true"
```
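Assuming the four files above are saved under the names shown, applying them in order might look like this (a sketch; it presumes a working kubectl context pointed at the cluster):

```shell
kubectl apply -f traefik-crd.yaml
kubectl apply -f traefik-rbac.yaml
kubectl apply -f traefik-config.yaml
kubectl apply -f traefik-deploy.yaml

# Confirm the DaemonSet Pod landed on the labeled node
kubectl get pods -n kube-system -l app=traefik -o wide
```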

Traefik 2.1 is now deployed, but for external clients to reach services inside Kubernetes, routing rules still need to be configured. Since the Traefik Dashboard was enabled above, start by configuring a route for it, so the Dashboard can be reached from outside.

Deploy the Traefik Dashboard route

```yaml
# cat traefik-dashboard-route.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-dashboard-route.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
  namespace: kube-system
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.linux.com`)
      kind: Rule
      services:
        - name: traefik
          port: 8080
```

Local hosts resolution
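The screenshot that showed the hosts entry did not survive; a minimal entry, assuming the node labeled `IngressProxy=true` earlier (20.0.0.202) is the entry point, would be:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) on the client machine
# 20.0.0.202 is an assumption here -- use the IP of your labeled node
20.0.0.202  traefik.linux.com
```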



At this point you can already see the essence of how Traefik does its proxying:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - name: web
      port: 80
    - name: websecure
      port: 443
    - name: admin
      port: 8080
  selector:
    app: traefik
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
  namespace: kube-system
spec:
  entryPoints:
    - web                                ## Entry point
  routes:
    - match: Host(`traefik.linux.com`)   ## Hostname used for external access
      kind: Rule
      services:
        - name: traefik                  ## Matches the Service's name
          port: 8080                     ## Matches the Service's port
```

Proxy a Deployment Pod

The requirement from the question: publish an app with a Deployment, then a Service, then proxy it.

The app's port is 3009.

For the experiment here I will just use an arbitrary application -- Prometheus.
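For the original Node.js service on port 3009, the manifests would follow exactly the same three-part pattern as the Prometheus example below. This is only a sketch: the name `nodejs-app`, the image, the command, and the hostname `nodejs.linux.com` are all assumptions, not from the question.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app              # hypothetical name
  namespace: assembly
  labels:
    app: nodejs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: nodejs-app
          image: node:12-alpine         # placeholder; use your own application image
          command: ["node", "server.js"]
          ports:
            - containerPort: 3009       # the app port from the question
              name: http
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app
  namespace: assembly
spec:
  selector:
    app: nodejs-app
  ports:
    - name: web
      port: 3009
      targetPort: http
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nodejs-app
  namespace: assembly
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`nodejs.linux.com`)   # hypothetical hostname
      kind: Rule
      services:
        - name: nodejs-app
          port: 3009
```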

```yaml
[root@bs-k8s-master01 prometheus]# cat prometheus-cm.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-cm.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: assembly
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 15s
    alerting:
      alertmanagers:
        - static_configs:
            - targets: ["alertmanager-svc:9093"]
    rule_files:
      - /etc/prometheus/rules.yaml
    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']
      - job_name: 'traefik'
        static_configs:
          - targets: ['traefik.kube-system.svc.cluster.local:8080']
      - job_name: "kubernetes-nodes"
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - source_labels: [__address__]
            regex: '(.*):10250'
            replacement: '${1}:9100'
            target_label: __address__
            action: replace
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
      - job_name: 'kubernetes-kubelet'
        kubernetes_sd_configs:
          - role: node
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
      - job_name: "kubernetes-apiserver"
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https
      - job_name: "kubernetes-scheduler"
        kubernetes_sd_configs:
          - role: endpoints
      - job_name: 'kubernetes-cadvisor'
        kubernetes_sd_configs:
          - role: node
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: (https?)
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            action: replace
            target_label: kubernetes_name
  rules.yaml: |
    groups:
      - name: test-rule
        rules:
          - alert: NodeMemoryUsage
            expr: (sum(node_memory_MemTotal_bytes) - sum(node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes)) / sum(node_memory_MemTotal_bytes) * 100 > 5
            for: 2m
            labels:
              team: node
            annotations:
              summary: "{{$labels.instance}}: High Memory usage detected"
              description: "{{$labels.instance}}: Memory usage is above 80% (current value is: {{ $value }})"
```
```yaml
[root@bs-k8s-master01 prometheus]# cat prometheus-rbac.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-rbac.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: assembly
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - services
      - endpoints
      - pods
      - nodes/proxy
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
      - nodes/metrics
    verbs:
      - get
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: assembly
```
```yaml
# cat prometheus-deploy.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-deploy.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: assembly
  labels:
    app: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      imagePullSecrets:
        - name: k8s-harbor-login
      serviceAccountName: prometheus
      containers:
        - image: harbor.linux.com/prometheus/prometheus:v2.4.3
          name: prometheus
          command:
            - "/bin/prometheus"
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus"
            - "--storage.tsdb.retention=24h"
            - "--web.enable-admin-api"   # Enable the admin HTTP API, which includes deleting time series
            - "--web.enable-lifecycle"   # Enable hot reload: hitting localhost:9090/-/reload takes effect immediately
          ports:
            - containerPort: 9090
              protocol: TCP
              name: http
          volumeMounts:
            - mountPath: "/prometheus"
              subPath: prometheus
              name: data
            - mountPath: "/etc/prometheus"
              name: config-volume
          resources:
            requests:
              cpu: 100m
              memory: 512Mi
            limits:
              cpu: 100m
              memory: 512Mi
      securityContext:
        runAsUser: 0
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: prometheus-pvc
        - configMap:
            name: prometheus-config
          name: config-volume
      nodeSelector:                      ## Node selector: start only on nodes with this label
        prometheus: "true"
```
```yaml
[root@bs-k8s-master01 prometheus]# cat prometheus-svc.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-svc.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: assembly
  labels:
    app: prometheus
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
    - name: web
      port: 9090
      targetPort: http
```
```yaml
[root@bs-k8s-master01 prometheus]# cat prometheus-ingressroute.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-ingressroute.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: prometheus
  namespace: assembly
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`prometheus.linux.com`)
      kind: Rule
      services:
        - name: prometheus
          port: 9090
```

Verification

The proxy clearly works once the local hosts entries are in place (screenshots omitted).

No problems at all.
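The verification screenshots did not survive, but from a machine outside the cluster the proxy can be checked roughly like this (a sketch; it assumes the Traefik node is 20.0.0.202, and uses each app's default UI path):

```shell
# Setting the Host header explicitly avoids having to edit the hosts file
curl -H "Host: traefik.linux.com" http://20.0.0.202/dashboard/
curl -H "Host: prometheus.linux.com" http://20.0.0.202/graph
```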
