09 - Deploying and Configuring the kubedns Add-on
Installing and configuring the kubedns add-on
The official YAML files live in the Kubernetes source tree under kubernetes/cluster/addons/dns.
The add-on is deployed on Kubernetes itself, and the official configuration files reference the following images:
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
Here I use the mirrored copies hosted on Tenxcloud (时速云) instead:
index.tenxcloud.com/jimmy/k8s-dns-kube-dns-amd64:1.14.1
index.tenxcloud.com/jimmy/k8s-dns-dnsmasq-nanny-amd64:1.14.1
index.tenxcloud.com/jimmy/k8s-dns-sidecar-amd64:1.14.1
The YAML configuration files below use these Tenxcloud mirrors.
kubedns-cm.yaml
kubedns-sa.yaml
kubedns-controller.yaml
kubedns-svc.yaml
The modified YAML files are available at: dns
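For reference, kubedns-cm.yaml normally contains nothing more than an empty ConfigMap that kube-dns mounts as optional configuration. The following is a minimal sketch based on the upstream add-on manifest; your modified copy may differ:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists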
Predefined system RoleBinding
The predefined ClusterRoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns ClusterRole, which has permission to access the DNS-related APIs of kube-apiserver:
$ kubectl get clusterrolebindings system:kube-dns -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-04-11T11:20:42Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-dns
  resourceVersion: "58"
  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings/system%3Akube-dns
  uid: e61f4d92-1ea8-11e7-8cd7-f4e9d49f8ed0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system
The Pods defined in kubedns-controller.yaml run under the kube-dns ServiceAccount defined in kubedns-sa.yaml, so they have permission to access the DNS-related APIs of kube-apiserver.
Configure the kube-dns ServiceAccount
No changes are needed.
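For reference, kubedns-sa.yaml only declares the ServiceAccount itself. A minimal sketch based on the upstream add-on manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile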
Configure the kube-dns Service
# cat kubedns-svc.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
- spec.clusterIP: 10.254.0.2 explicitly pins the kube-dns Service IP; this IP must match the value of the kubelet --cluster-dns flag (see the sketch below).
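For comparison, the kubelet side of that pairing looks roughly like the following. This is only a sketch: the file path and the KUBELET_DNS_ARGS variable name are assumptions that depend on how your kubelet is started.
# Illustrative excerpt from a kubelet startup configuration (e.g. /etc/kubernetes/kubelet):
# --cluster-dns must equal the clusterIP above; --cluster-domain must match the kube-dns --domain
KUBELET_DNS_ARGS="--cluster-dns=10.254.0.2 --cluster-domain=cluster.local"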
Configure the kube-dns Deployment
# cat kubedns-controller.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.
# __MACHINE_GENERATED_WARNING__
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: index.tenxcloud.com/jimmy/k8s-dns-kube-dns-amd64:1.14.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: index.tenxcloud.com/jimmy/k8s-dns-dnsmasq-nanny-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: index.tenxcloud.com/jimmy/k8s-dns-sidecar-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
- The only real change is the image addresses; point them at whichever registry you use (a substitution sketch follows this list).
- The Deployment uses the kube-dns ServiceAccount that the system has already bound via the predefined RoleBinding, so it has permission to access the DNS-related APIs of kube-apiserver.
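If you mirror the images to a different registry, a one-off substitution over the upstream manifest is usually all that is needed. A hedged example; the replacement prefix below is simply the Tenxcloud path used in this article, so substitute your own:
# Swap the gcr.io image prefix for a reachable mirror in the controller manifest
sed -i 's|gcr.io/google_containers|index.tenxcloud.com/jimmy|g' kubedns-controller.yaml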
Apply all the definition files
# pwd
/root/yaml/dns
# ls *.yaml
kubedns-cm.yaml kubedns-controller.yaml kubedns-sa.yaml kubedns-svc.yaml
# kubectl create -f .
configmap "kube-dns" created
deployment "kube-dns" created
serviceaccount "kube-dns" created
service "kube-dns" created
# Check the Deployment status: kubectl get deployment -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-dns 1 1 1 1 12m
# Check that the DNS pods started correctly: kubectl get pods --all-namespaces | grep kube-dns
kube-system kube-dns-351402727-vcvpc 3/3 Running 0 10m
# Check the service IP and ports: kubectl get services --all-namespaces | grep kube-dns
kube-system kube-dns 10.254.0.2 <none> 53/UDP,53/TCP 14m
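If the pod does not reach 3/3 Running, the per-container logs are the first place to look. The pod name below is the one from the output above; yours will differ:
# Inspect each container of the kube-dns pod
kubectl logs -n kube-system kube-dns-351402727-vcvpc -c kubedns
kubectl logs -n kube-system kube-dns-351402727-vcvpc -c dnsmasq
kubectl logs -n kube-system kube-dns-351402727-vcvpc -c sidecar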
Testing kubedns functionality
Create an nginx Deployment
# cat my-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: index.tenxcloud.com/docker_library/nginx:1.9.0
        ports:
        - containerPort: 80
# kubectl create -f my-nginx.yaml
deployment "my-nginx" created
# kubectl get pods --all-namespaces|grep my-nginx
default my-nginx-925637600-4sr5g 1/1 Running 0 19m
default my-nginx-925637600-6f9w7 1/1 Running 0 19m
Expose the Deployment to create a my-nginx Service
# kubectl expose deploy my-nginx
# kubectl get services --all-namespaces |grep my-nginx
default my-nginx 10.254.101.236 <none> 80/TCP 10s
Create another Pod and check whether its /etc/resolv.conf contains the --cluster-dns and --cluster-domain values configured on the kubelet, and whether the service name my-nginx resolves to the Cluster IP 10.254.101.236.
# cat nginxnew.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxnew
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: nginxnew
    spec:
      containers:
      - name: nginxnew
        image: index.tenxcloud.com/docker_library/nginx:1.9.0
        ports:
        - containerPort: 80
# kubectl create -f nginxnew.yaml
deployment "nginxnew" created
# kubectl get pods --all-namespaces|grep nginxnew
default nginxnew-248912974-bwqrx 1/1 Running 0 4m
default nginxnew-248912974-c881p 1/1 Running 0 4m
# kubectl exec nginxnew-248912974-bwqrx -i -t -- /bin/bash
root@nginxnew-248912974-bwqrx:/# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5
root@nginxnew-248912974-bwqrx:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.101.236): 56 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@nginxnew-248912974-bwqrx:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 56 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@nginxnew-248912974-bwqrx:/# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 56 data bytes
^C--- kube-dns.kube-system.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
The results show that service names resolve correctly to their corresponding IPs. The 100% ping packet loss is expected: Service Cluster IPs are virtual IPs handled by kube-proxy and generally do not answer ICMP, so what this test verifies is name resolution, not ping reachability.
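Since ping proves little against a virtual IP, an nslookup from a throwaway Pod is a more direct check. A sketch under the assumption that busybox:1.28 (or any image shipping a working nslookup) is pullable from your nodes:
# Resolve the service name from inside the cluster, then remove the test pod
kubectl run dnstest --image=busybox:1.28 --rm -it --restart=Never -- \
  nslookup my-nginx.default.svc.cluster.local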