Original article:

https://jimmysong.io/kubernetes-handbook/cloud-native/play-with-kubernetes.html (Deploying a Kubernetes cluster on CentOS)

Architecture Overview (Preliminary Preparation)

About This Document

This series documents every step of deploying a Kubernetes cluster from binaries, rather than with an automated tool such as kubeadm, and enables TLS authentication for the cluster. The installation steps apply to bare-metal, on-premise, and public-cloud environments alike.

If you want to quickly set up a Kubernetes cluster with virtual machines on your own computer, see the guide to building a local distributed development environment (using Vagrant and VirtualBox, http://192.168.66.102/k8s/k8s-doc/blob/master/develop/using-vagrant-and-virtualbox-for-development.md).

During the deployment, the startup parameters of each component are listed in detail, configuration files are provided, and their meanings and likely pitfalls are explained.

Once deployment is complete, you will understand how the system components interact, which lets you troubleshoot real problems quickly.

This document therefore targets readers who already have some Kubernetes background and want to learn the system's configuration and internals by deploying it step by step.

Note: this document does not cover installing Docker or a private image registry. The images used in the installation instructions come from Google Cloud Platform; to make them easier to download in China, I cloned them and uploaded them to the Tenxcloud image market for free download.

For the latest official images, visit the Google Cloud Platform Container Registry.

Cluster Details

•OS: CentOS Linux release 7.3.1611 (Core), kernel 3.10.0-514.16.1.el7.x86_64
•Kubernetes 1.9.0+ (the minimum supported version is 1.6)
•Docker 1.12.5 (installed via yum)
•Etcd 3.1.5
•Flannel 0.7.1, vxlan or host-gw networking
•TLS-authenticated communication (all components: etcd, kubernetes master, and nodes)
•RBAC authorization
•kubelet TLS BootStrapping
•Cluster add-ons: kubedns, dashboard, heapster (influxdb, grafana), EFK (elasticsearch, fluentd, kibana)
•Private Docker registry harbor (deploy it yourself; harbor ships an offline installer that starts directly with docker-compose)

Environment

In the steps below we deploy a three-node Kubernetes 1.9.0 cluster on three physical machines running CentOS.

Roles are assigned as follows:

Image registry: 192.168.55.33  (harbor: https://www.cnblogs.com/jicki/p/5737369.html)
Master: 192.168.55.36
Node: 192.168.55.36, 192.168.55.37, 192.168.55.38
Note: the host 192.168.55.36 serves as both master and node. All certificate generation and all kubectl commands are run on this node. Once a node has joined the Kubernetes cluster there is no need to log in to it again.

Step Overview

1. Create TLS certificates and keys
2. Create kubeconfig files
3. Create a highly available etcd cluster
4. Install the kubectl command-line tool
5. Deploy the master node
6. Install the flannel network plugin
7. Deploy the node(s)
8. Install the kubedns add-on
9. Install the dashboard add-on
10. Install the heapster add-on
11. Install the EFK add-on

1. Create TLS Certificates and Keys

The generated CA certificate and key files are:
•ca-key.pem
•ca.pem
•kubernetes-key.pem
•kubernetes.pem
•kube-proxy.pem
•kube-proxy-key.pem
•admin.pem
•admin-key.pem

The components use the certificates as follows:
•etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
•kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
•kubelet: uses ca.pem;
•kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;
•kubectl: uses ca.pem, admin-key.pem, admin.pem;
•kube-controller-manager: uses ca-key.pem, ca.pem

Note: all of the following operations are performed on the master node, i.e. the host 192.168.55.36. The certificates only need to be created once; when adding a new node to the cluster later, simply copy the certificates under /etc/kubernetes/ to the new node.

1.1 Install CFSSL

Option 1: install directly from the binary release
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
Option 2: install with the go command (our system has Go 1.7.5 installed, which makes this quicker):
$ go get -u github.com/cloudflare/cfssl/cmd/...
$ echo $GOPATH
/usr/local
$ ls /usr/local/bin/cfssl*
cfssl cfssl-bundle cfssl-certinfo cfssljson cfssl-newkey cfssl-scan
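Either way, a quick check confirms the toolchain is in place before moving on (output varies by release):
-------------------------------------------------------------------------
# Confirm the cfssl binaries are on the PATH and runnable
cfssl version
which cfssl cfssljson cfssl-certinfo
-------------------------------------------------------------------------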

1.2 Create a CA (Certificate Authority)

1.2.1 Create the CA configuration file
-----------------------------------------------------------------------
mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# Create the ca-config.json file below, following the format of config.json
# The expiry time is set to 87600h
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
-------------------------------------------------------------------------
Field descriptions
• ca-config.json: multiple profiles can be defined, each with its own expiry time, usage scenarios, and other parameters; a specific profile is selected later when signing certificates;
• signing: indicates this certificate can be used to sign other certificates; CA=TRUE in the generated ca.pem;
• server auth: a client can use this CA to verify certificates presented by servers;
• client auth: a server can use this CA to verify certificates presented by clients;
1.2.2 Create the CA certificate signing request
Create the ca-csr.json file with the following content:
-------------------------------------------------------------------------
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
],
"ca": {
"expiry": "87600h"
}
}
EOF
-------------------------------------------------------------------------
Field descriptions
•"CN": Common Name; kube-apiserver extracts this field from the certificate as the request's User Name; browsers use this field to check whether a website is legitimate;
•"O": Organization; kube-apiserver extracts this field from the certificate as the Group the requesting user belongs to;
1.2.3 Generate the CA certificate and private key
-------------------------------------------------------------------------
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
-------------------------------------------------------------------------

1.3 Create the kubernetes Certificate

1.3.1 Create the kubernetes certificate signing request file kubernetes-csr.json:
-------------------------------------------------------------------------
cd /root/ssl/
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.55.33",
"192.168.55.36",
"192.168.55.37",
"192.168.55.38",
"172.16.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
-------------------------------------------------------------------------
Field descriptions
•If the hosts field is non-empty, it must list the IPs or domain names authorized to use the certificate. Since this certificate is later used by both the etcd cluster and the kubernetes master cluster, the list above includes the etcd cluster hosts, the kubernetes master hosts, and the kubernetes service IP (generally the first IP of the service-cluster-ip-range specified for kube-apiserver, e.g. 10.254.0.1).
•This is a minimal Kubernetes installation: one private image registry and a three-node cluster. The physical node IPs above may also be replaced with hostnames.
Note: an empty hosts field causes problems later, so be sure to fill it in.
1.3.2 Generate the kubernetes certificate and private key
-------------------------------------------------------------------------
Option 1:
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
Option 2:
Or specify the parameters directly on the command line:
echo '{"CN":"kubernetes","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname="127.0.0.1,172.20.0.112,172.20.0.113,172.20.0.114,172.20.0.115,kubernetes,kubernetes.default" - | cfssljson -bare kubernetes
-------------------------------------------------------------------------

1.4 Create the admin Certificate

1.4.1 Create the admin certificate signing request file admin-csr.json:
-------------------------------------------------------------------------
cd /root/ssl/
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
-------------------------------------------------------------------------
Field descriptions
•kube-apiserver later uses RBAC to authorize requests from clients (such as kubelet, kube-proxy, and Pods);
•kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds Group system:masters to Role cluster-admin, which grants permission to call all kube-apiserver APIs;
•O sets this certificate's Group to system:masters. When kubelet uses this certificate to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and since the certificate's group is the pre-authorized system:masters, it is granted access to all APIs;
1.4.2 Note:
This admin certificate is later used to generate the administrator's kubeconfig file. Nowadays RBAC is the recommended way to control roles and permissions in Kubernetes; Kubernetes treats the certificate's CN field as the User and the O field as the Group (see the "X509 Client Certs" section of "Users and authentication/authorization in Kubernetes"). After the cluster is up, running kubectl get clusterrolebinding cluster-admin -o yaml shows that the clusterrolebinding cluster-admin has a subject of kind Group named system:masters, and its roleRef object is the ClusterRole cluster-admin. In other words, any user or serviceAccount in the system:masters Group holds the cluster-admin role, which is why our kubectl commands have full administrative rights over the whole cluster. You can inspect this with kubectl get clusterrolebinding cluster-admin -o yaml.
-------------------------------------------------------------------------
$ kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-04-11T11:20:42Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: ""
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: e61b97b2-1ea8-11e7-8cd7-f4e9d49f8ed0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
-------------------------------------------------------------------------
1.4.3 Generate the admin certificate and private key:
-------------------------------------------------------------------------
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
-------------------------------------------------------------------------

1.5 Create the kube-proxy Certificate

1.5.1 Create the kube-proxy certificate signing request file kube-proxy-csr.json:
-------------------------------------------------------------------------
cd /root/ssl/
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
-------------------------------------------------------------------------
Field descriptions:
•CN sets this certificate's User to system:kube-proxy;
•the kube-apiserver predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs;
1.5.2 Generate the kube-proxy client certificate and private key
-------------------------------------------------------------------------
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
-------------------------------------------------------------------------

1.6 Verify the Certificates

Using the openssl command
$ openssl x509 -noout -text -in kubernetes.pem
...
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=Kubernetes
Validity
Not Before: Apr  5 05:36:00 2017 GMT
Not After : Apr  5 05:36:00 2018 GMT
Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
...
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier:
DD::::::A9::::3A:0E:D7::DB::F8:6C:E0:E0
X509v3 Authority Key Identifier:
keyid:::3B::BD:::::AF:A0:::F6::::::CD
X509v3 Subject Alternative Name:
DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:172.20.0.112, IP Address:172.20.0.113, IP Address:172.20.0.114, IP Address:172.20.0.115, IP Address:10.254.0.1
...
•Confirm the Issuer field matches ca-csr.json;
•Confirm the Subject field matches kubernetes-csr.json;
•Confirm the X509v3 Subject Alternative Name field matches kubernetes-csr.json;
•Confirm the X509v3 Key Usage and Extended Key Usage fields match the kubernetes profile in ca-config.json;
Using the cfssl-certinfo command
$ cfssl-certinfo -cert kubernetes.pem
...
{
"subject": {
"common_name": "kubernetes",
"country": "CN",
"organization": "k8s",
"organizational_unit": "System",
"locality": "BeiJing",
"province": "BeiJing",
"names": [
"CN",
"BeiJing",
"BeiJing",
"k8s",
"System",
"kubernetes"
]
},
"issuer": {
"common_name": "Kubernetes",
"country": "CN",
"organization": "k8s",
"organizational_unit": "System",
"locality": "BeiJing",
"province": "BeiJing",
"names": [
"CN",
"BeiJing",
"BeiJing",
"k8s",
"System",
"Kubernetes"
]
},
"serial_number": "",
"sans": [
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"127.0.0.1",
"10.64.3.7",
"10.254.0.1"
],
"not_before": "2017-04-05T05:36:00Z",
"not_after": "2018-04-05T05:36:00Z",
"sigalg": "SHA256WithRSA",
...

1.7 Distribute the Certificates

Copy the generated certificate and key files (suffix .pem) to /etc/kubernetes/ssl on all machines for later use;
-------------------------------------------------------------------------
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
-------------------------------------------------------------------------
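The copy above only covers the local machine; a minimal sketch for pushing the certificates to the other two nodes, assuming root SSH access and the node IPs used in this document:
-------------------------------------------------------------------------
# Distribute the generated certificates from 192.168.55.36 to the other nodes
for node in 192.168.55.37 192.168.55.38; do
  ssh root@${node} "mkdir -p /etc/kubernetes/ssl"
  scp /etc/kubernetes/ssl/*.pem root@${node}:/etc/kubernetes/ssl/
done
-------------------------------------------------------------------------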

2. Create kubeconfig Files

Perform the following operations on the master node.

2.1 Install the kubectl Command-Line Tool

mkdir -p /opt/k8s/bin
wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /opt/k8s/bin/

2.2 Create the kubectl kubeconfig File

------------------------------------------------------------
export KUBE_APISERVER="https://192.168.55.36:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER}
# Set client authentication parameters
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem
# Set context parameters
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
# Set the default context
kubectl config use-context kubernetes
------------------------------------------------------------
Notes:
•The admin.pem certificate's O field is system:masters; the kube-apiserver predefined RoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants permission to call all kube-apiserver APIs;
•The generated kubeconfig is saved to ~/.kube/config. Note: the ~/.kube/config file holds the highest privileges for this cluster; keep it safe.

2.3 Create the TLS Bootstrapping Token

The token can be any string containing 128 bits of data and should be produced by a secure random number generator.
------------------------------------------------------------
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cp token.csv /etc/kubernetes/
------------------------------------------------------------
Note: before proceeding, check the token.csv file and make sure the ${BOOTSTRAP_TOKEN} environment variable has been replaced with its real value.
BOOTSTRAP_TOKEN is written into the token.csv file used by kube-apiserver and the bootstrap.kubeconfig file used by kubelet. If you later regenerate BOOTSTRAP_TOKEN, you must (a combined sketch follows this list):
1. Update token.csv and distribute it to /etc/kubernetes/ on all machines (master and nodes; distributing it to the nodes is optional); [step 2.3]
2. Regenerate bootstrap.kubeconfig and distribute it to /etc/kubernetes/ on all node machines; [step 2.4]
3. Restart the kube-apiserver and kubelet processes;
4. Re-approve the kubelet CSR requests. [step 6.3.7]
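A combined sketch of that regeneration flow, assuming the file locations used in this document (the approve step is shown in 6.3.7):
------------------------------------------------------------
# 1. Regenerate the token and token.csv, then distribute it
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cp token.csv /etc/kubernetes/
# 2. Re-run the kubectl config commands from step 2.4 to rebuild
#    bootstrap.kubeconfig, then distribute it to every node
# 3. Restart the affected processes
systemctl restart kube-apiserver    # on the master
systemctl restart kubelet           # on every node
# 4. Re-approve the kubelet CSR requests (see 6.3.7)
------------------------------------------------------------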

2.4 Create the kubelet bootstrapping kubeconfig File

cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.55.36:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
------------------------------------------------------------
Notes:
•With --embed-certs set to true, the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig file;
•No key or certificate is given in the client authentication parameters; they are generated automatically via kube-apiserver later;

2.5 Create the kube-proxy kubeconfig File

------------------------------------------------------------
export KUBE_APISERVER="https://192.168.55.36:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
------------------------------------------------------------
Notes:
•--embed-certs is true for both the cluster and client authentication parameters, which embeds the contents of the certificate-authority, client-certificate, and client-key files into the generated kube-proxy.kubeconfig;
•The kube-proxy.pem certificate's CN is system:kube-proxy; the kube-apiserver predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs;

2.6 Distribute the kubeconfig Files

Distribute the two kubeconfig files to /etc/kubernetes/ on all Node machines
------------------------------------------------------------
cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
------------------------------------------------------------

3. Create a Highly Available etcd Cluster

3.1 Environment

-------------------------------------------------------------
3 nodes:
192.168.55.36
192.168.55.37
192.168.55.38
-------------------------------------------------------------

3.2 TLS Certificate Files

-------------------------------------------------------------
TLS certificates are needed for encrypted etcd communication; here we reuse the kubernetes certificates created earlier
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl
-------------------------------------------------------------
Note: the hosts field of the kubernetes certificate must include the IPs of the three machines above, otherwise certificate verification will fail later;

3.3 Download and Install the etcd Binaries

-------------------------------------------------------------
Option 1:
mkdir -p /opt/etcd/bin
wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
mv etcd-v3.1.5-linux-amd64/etcd* /opt/etcd/bin/
Option 2:
yum install etcd
If you install via yum, the etcd binary lands in /usr/bin by default; remember to change the start command in the etcd.service file below to /usr/bin/etcd.
-------------------------------------------------------------

3.4 Create the etcd systemd Unit File

Create the file etcd.service under /etc/systemd/system/ with the content below. Be sure to replace the IP addresses with those of your own etcd cluster hosts.
vim /etc/systemd/system/etcd.service
-------------------------------------------------------------
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--name ${ETCD_NAME} \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster infra1=https://192.168.55.36:2380,infra2=https://192.168.55.37:2380,infra3=https://192.168.55.38:2380 \
--initial-cluster-state new \
--data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
Notes:
•etcd's working directory and data directory are both /var/lib/etcd; create this directory before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
•For secure communication you must specify etcd's own key pair (cert-file and key-file), the peer key pair and CA certificate (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA certificate for clients (trusted-ca-file);
•The hosts field of the kubernetes-csr.json used to create kubernetes.pem must contain all etcd node IPs, otherwise certificate verification fails;
•When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;

3.5 Environment Variable File /etc/etcd/etcd.conf

-------------------------------------------------------------
# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.55.36:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.55.36:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.55.36:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.55.36:2379"
-------------------------------------------------------------
Notes:
This is the configuration for node 192.168.55.36; for the other two etcd nodes, just change the IP addresses above to the corresponding node's IP and set ETCD_NAME to the matching infra1/infra2/infra3.

3.6 Start the etcd Service

-------------------------------------------------------------
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
-------------------------------------------------------------
Notes:
Repeat the steps above on all kubernetes master and node machines until the etcd service is running on every one of them.
Note:
If the logs show connection errors, check that all nodes' firewalls allow ports 2379 and 2380.
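For example, with firewalld the two ports can be opened like this (a sketch; adjust to your own firewall setup):
-------------------------------------------------------------
# Open the etcd client (2379) and peer (2380) ports on every etcd node
firewall-cmd --permanent --add-port=2379/tcp --add-port=2380/tcp
firewall-cmd --reload
-------------------------------------------------------------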

3.7 Verify the Service

Run the following command on any kubernetes master machine:
-------------------------------------------------------------
$ etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
cluster-health
2017-04-11 15:31:09.082250 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-04-11 15:31:09.083681 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 9a2ec640d25672e5 is healthy: got healthy result from https://172.20.0.115:2379
member bc6f27ae3be34308 is healthy: got healthy result from https://172.20.0.114:2379
member e5c92ea26c4edba0 is healthy: got healthy result from https://172.20.0.113:2379
cluster is healthy
-------------------------------------------------------------
Notes:
The cluster service is healthy when the last line reads "cluster is healthy".
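etcdctl member list gives a second view of the cluster membership, using the same certificate flags as above:
-------------------------------------------------------------
etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
member list
-------------------------------------------------------------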

4. Deploy the Master Node

4.1 Master Overview

-------------------------------------------------------------
The kubernetes master node runs the following components:
•kube-apiserver
•kube-scheduler
•kube-controller-manager
For now these three components need to be deployed on the same machine.
•kube-scheduler, kube-controller-manager, and kube-apiserver are tightly coupled in function;
•Only one kube-scheduler and one kube-controller-manager process may be active at a time; running more than one requires leader election;
Notes:
•Master high availability is not implemented yet.
•The flannel network plugin is not deployed on the master node; if you also want to reach ClusterIPs from the master, see the Configure Flanneld part of the next section on deploying the nodes.
-------------------------------------------------------------

4.2 TLS Certificate Files

-------------------------------------------------------------
The pem certificate files below were already created in the "Create TLS certificates and keys" step, and token.csv was created while creating the kubeconfig files. Let's double-check them.
$ ls /etc/kubernetes/ssl
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem
$ ls /etc/kubernetes/token.csv
-------------------------------------------------------------

4.3 Download the Kubernetes 1.9.0 Binaries

-------------------------------------------------------------
mkdir -p /opt/k8s/bin
wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /opt/k8s/bin
-------------------------------------------------------------

4.4 Configure and Start kube-apiserver

4.4.1 Create the kube-apiserver service configuration file
Content of the service configuration file /etc/systemd/system/kube-apiserver.service:
-------------------------------------------------------------
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
4.4.2 The content of /etc/kubernetes/config is:
-------------------------------------------------------------
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://test-001.jimmysong.io:8080"
KUBE_MASTER="--master=http://192.168.55.36:8080"
-------------------------------------------------------------
Notes: this configuration file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
4.4.3 The content of the apiserver configuration file /etc/kubernetes/apiserver is:
-------------------------------------------------------------
###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"
KUBE_API_ADDRESS="--advertise-address=192.168.55.36 --bind-address=192.168.55.36 --insecure-bind-address=192.168.55.36"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.16.0.1/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
-------------------------------------------------------------
Notes:
•--experimental-bootstrap-token-auth: Bootstrap Token Authentication became a stable feature in 1.9, and the flag was renamed to --enable-bootstrap-token-auth.
•If you change --service-cluster-ip-range midway, you must delete the kubernetes service in the default namespace with kubectl delete service kubernetes; the system then recreates the service with a new IP automatically. Otherwise the apiserver log keeps reporting: the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/; please recreate
•--authorization-mode=RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
•kube-scheduler and kube-controller-manager are generally deployed on the same machine as kube-apiserver and talk to it over the insecure port;
•kubelet, kube-proxy, and kubectl are deployed on the other Node machines; to access kube-apiserver over the secure port they must first pass TLS certificate authentication and then RBAC authorization;
•kube-proxy and kubectl pass RBAC authorization through the User and Group specified in their certificates;
•If the kubelet TLS Bootstrap mechanism is used, do not also set --kubelet-certificate-authority, --kubelet-client-certificate, or --kubelet-client-key, otherwise kube-apiserver later fails to verify the kubelet certificate with "x509: certificate signed by unknown authority";
•The --admission-control value must include ServiceAccount;
•--bind-address must not be 127.0.0.1;
•runtime-config is set to rbac.authorization.k8s.io/v1beta1, i.e. the runtime apiVersion;
•--service-cluster-ip-range specifies the Service Cluster IP range; this range must not be routable;
•By default kubernetes objects are stored under the /registry path in etcd; this can be adjusted with --etcd-prefix;
•To expose an unauthenticated HTTP endpoint, add the two flags --insecure-port=8080 --insecure-bind-address=127.0.0.1. Note: in production, never bind this to an address other than 127.0.0.1.
Kubernetes 1.9
•For a Kubernetes 1.9 cluster, make sure KUBE_API_ARGS contains --authorization-mode=Node,RBAC to add the Node authorization mode, otherwise nodes cannot register.
•--experimental-bootstrap-token-auth was deprecated in kubernetes 1.9; the flag was renamed to --enable-bootstrap-token-auth
4.4.4 Start kube-apiserver
-------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
-------------------------------------------------------------
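As a quick sanity check of the secure port, the healthz endpoint can be queried with the admin client certificate created earlier (a sketch):
-------------------------------------------------------------
curl --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/admin.pem \
--key /etc/kubernetes/ssl/admin-key.pem \
https://192.168.55.36:6443/healthz
# expected response: ok
-------------------------------------------------------------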

4.5 Configure and Start kube-controller-manager

4.5.1 Create the kube-controller-manager service configuration file
File path: /etc/systemd/system/kube-controller-manager.service
-------------------------------------------------------------
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
4.5.2 Configuration file /etc/kubernetes/controller-manager
-------------------------------------------------------------
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=172.16.0.1/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
-------------------------------------------------------------
Notes:
•--service-cluster-ip-range specifies the CIDR range for Services in the cluster; this network must not be routable between Nodes, and the value must match the same flag on kube-apiserver;
•The certificate and private key given by --cluster-signing-* are used to sign the certificates and keys created for TLS BootStrap;
•--root-ca-file is used to verify the kube-apiserver certificate; when set, this CA certificate file is also placed into Pod containers' ServiceAccounts;
•--address must be 127.0.0.1, since kube-apiserver expects scheduler and controller-manager on the same machine;
4.5.3 Start kube-controller-manager
-------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
-------------------------------------------------------------
4.5.4 Check the status of each component
-------------------------------------------------------------
$ kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
-------------------------------------------------------------

4.6 Configure and Start kube-scheduler

4.6.1 Create the kube-scheduler service configuration file
File path: /etc/systemd/system/kube-scheduler.service
-------------------------------------------------------------
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
4.6.2 Configuration file /etc/kubernetes/scheduler
-------------------------------------------------------------
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
-------------------------------------------------------------
Notes:
•--address must be 127.0.0.1, because the current kube-apiserver expects scheduler and controller-manager on the same machine;
4.6.3 Start kube-scheduler
-------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
-------------------------------------------------------------

4.7 Verify the Master Node

-------------------------------------------------------------
$ kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
-------------------------------------------------------------

5. Install the flannel Network Plugin

5.1 flannel Overview and Installation

-------------------------------------------------------------
All node nodes need the network plugin so that all Pods join the same flat network; this section is the reference for installing flannel. Installing flanneld with yum is recommended unless you need a specific version; by default yum installs flannel 0.7.1.
Note: version 0.7.1 has problems.
The workaround is as follows:
  Replace the flanneld executable with a newer release.
  Download: wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
Reference: https://www.cnblogs.com/cs-zh/p/7879658.html
yum install -y flannel
-------------------------------------------------------------
-------------------------------------------------------------

5.2 Service Configuration File

/etc/systemd/system/flanneld.service
-------------------------------------------------------------
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
-etcd-prefix=${FLANNEL_ETCD_PREFIX} \
$FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
-------------------------------------------------------------

5.3 /etc/sysconfig/flanneld配置文件

-------------------------------------------------------------
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
-------------------------------------------------------------
Notes:
If the host has multiple NICs (e.g. in a Vagrant environment), add the outbound NIC to FLANNEL_OPTIONS, for example -iface=eth2, as shown below.
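For example, the last line of /etc/sysconfig/flanneld would become (eth2 here is an assumed NIC name):
-------------------------------------------------------------
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem -iface=eth2"
-------------------------------------------------------------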

5.4 Create the Network Configuration in etcd

Run the following commands to allocate the IP address range for docker
-------------------------------------------------------------
etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mkdir /kube-centos/network

etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mk /kube-centos/network/config '{"Network":"10.200.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
-------------------------------------------------------------
Notes:
The configuration above already uses host-gw; to use vxlan mode instead, simply swap host-gw for vxlan.
Note: as described in the network and cluster performance testing section, we ultimately used host-gw mode. For the backends flannel supports, see https://github.com/coreos/flannel/blob/master/Documentation/backends.md.

5.5 Start flannel

-------------------------------------------------------------
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
-------------------------------------------------------------

5.6 Verify flannel

Querying etcd now shows the following
-------------------------------------------------------------
etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kube-centos/network/subnets
Result:
/kube-centos/network/subnets/10.200.75.0-24

etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
get /kube-centos/network/config
Result:
{"Network":"10.200.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}

etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
get /kube-centos/network/subnets/10.200.75.0-24
Result:
{"PublicIP":"192.168.55.36","BackendType":"host-gw"}
-------------------------------------------------------------
Note: if you can see the content above, flannel has been installed successfully. You can additionally check each node as sketched below.
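On each node, confirm that flanneld wrote its subnet file and, in host-gw mode, installed routes to the other nodes' subnets (a sketch; the subnet values differ per node):
-------------------------------------------------------------
cat /run/flannel/subnet.env   # shows this node's FLANNEL_SUBNET
ip route | grep 10.200.       # host-gw mode adds routes to the peer subnets
-------------------------------------------------------------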

6. Deploy the Nodes

6.1 Pre-deployment Overview

6.1.1 A Kubernetes node runs the following components:
-------------------------------------------------------------
•Flanneld: see my earlier article on flannel-based Kubernetes network configuration. TLS was not configured there; the TLS settings now need to be added to the service configuration file. For the installation, see the previous section on installing the flannel network plugin.
•Docker 1.12.5: installing docker is simple and not covered here, but pay attention to its configuration.
•kubelet: installed directly from the binary
•kube-proxy: installed directly from the binary
Note: flannel must be installed on every node; it is optional on the master.
-------------------------------------------------------------
6.1.2 Step summary
-------------------------------------------------------------
1. Confirm that the flannel plugin installed and configured in the previous step is started and running normally
2. Install and configure docker, then start it
3. Install and configure kubelet and kube-proxy, then start them
4. Verify
-------------------------------------------------------------
6.1.3 Directories and files
Let's check again that the three nodes have the certificates and configuration files created in the previous steps.
-------------------------------------------------------------
$ ls /etc/kubernetes/ssl
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem
$ ls /etc/kubernetes/
apiserver bootstrap.kubeconfig config controller-manager kubelet kube-proxy.kubeconfig proxy scheduler ssl token.csv
-------------------------------------------------------------

6.2 Install and Configure Docker

6.2.1 Install docker
yum install -y docker
6.2.2 Configure docker
-------------------------------------------------------------
6.2.2.1
After flanneld is started via systemctl, it automatically runs ./mk-docker-opts.sh -i, generating the following two environment-variable files:
/run/flannel/subnet.env
# content:
FLANNEL_NETWORK=10.200.0.0/16
FLANNEL_SUBNET=10.200.75.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

/run/flannel/docker
# content:
DOCKER_OPT_BIP="--bip=10.200.75.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1500"
DOCKER_NETWORK_OPTIONS=" --bip=10.200.75.1/24 --ip-masq=true --mtu=1500"
Docker reads these two environment files for its container startup parameters.
6.2.2.2
/etc/systemd/system/docker.service
# content (the two EnvironmentFile entries for /run/flannel/docker and /run/flannel/subnet.env have been added):
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket registries.service
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/flannel/subnet.env
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
--init-path=/usr/libexec/docker/docker-init-current \
--seccomp-profile=/etc/docker/seccomp.json \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY \
$REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target

6.2.2.2-1 Docker registry acceleration

/etc/sysconfig/docker
  change the OPTIONS value to:

OPTIONS='--selinux-enabled=false --insecure-registry daocloud.io'

6.2.2.3
Start docker
systemctl daemon-reload
systemctl start docker
systemctl enable docker
systemctl status docker
ps -ef | grep docker   # check the process; you should see parameters such as --bip=10.200.75.1/24
6.2.2.4
After restarting docker, kubelet must be restarted too, and here another problem appears: kubelet fails to start, reporting:
Mar :: test-.jimmysong.io kubelet[]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Fix:
This is caused by kubelet and docker using different cgroup drivers. In /etc/kubernetes/kubelet, the --cgroup-driver parameter can be set to "cgroupfs" or "systemd".
In docker's service file /etc/systemd/system/docker.service, set --exec-opt native.cgroupdriver=systemd in ExecStart, then verify as sketched below.
-------------------------------------------------------------
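To confirm that the two sides now agree, compare Docker's cgroup driver with the kubelet flag:
------------------------------------------------------------
docker info 2>/dev/null | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd
grep cgroup-driver /etc/kubernetes/kubelet          # expect: --cgroup-driver=systemd
------------------------------------------------------------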

6.3 Install and Configure kubelet

6.3.1 Grant kubelet permission to send requests to kube-apiserver (run on the master)
When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests:
-------------------------------------------------------------
cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
-------------------------------------------------------------
Notes:
--user=kubelet-bootstrap is the username specified in /etc/kubernetes/token.csv, which is also written into /etc/kubernetes/bootstrap.kubeconfig;
6.3.2 Distribute the configuration files
Distribute the two kubeconfig files to /etc/kubernetes/ on all Node machines
------------------------------------------------------------
cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
------------------------------------------------------------
6.3.3 Download the latest kubelet and kube-proxy binaries
Note: download the package matching your Kubernetes version.
------------------------------------------------------------
mkdir -p /opt/k8s/bin
wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r ./server/bin/{kube-proxy,kubelet} /opt/k8s/bin
------------------------------------------------------------
6.3.4 Create the kubelet service configuration file
File location: /etc/systemd/system/kubelet.service
------------------------------------------------------------
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/opt/k8s/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
------------------------------------------------------------
The kubelet configuration file is /etc/kubernetes/kubelet; change the IP addresses in it to each node's own IP. Note: before starting kubelet you must manually create the /var/lib/kubelet directory.
6.3.5 The kubelet configuration file /etc/kubernetes/kubelet
------------------------------------------------------------
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.55.36"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.55.36"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER="--api-servers=http://192.168.55.36:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=index.tenxcloud.com/jimmy/pod-infrastructure:rhel7"
#
## Add your own!
KUBELET_ARGS="--fail-swap-on=false --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --cgroup-driver=systemd --cluster-dns=172.16.0.2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --logtostderr false --log-dir /var/log/kubernetes --v 2"
------------------------------------------------------------
Notes:
•For kubelet in a kubernetes 1.9 cluster, the KUBELET_API_SERVER setting was dropped in favor of defining the master address in a kubeconfig file, so comment out the KUBELET_API_SERVER line.
•When starting with systemd, two extra flags are required: --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
•--experimental-bootstrap-kubeconfig became --bootstrap-kubeconfig in 1.9.
•--address must not be 127.0.0.1, otherwise Pods will later fail to call kubelet's API, because from inside a Pod 127.0.0.1 points at the Pod itself rather than the kubelet;
•If --hostname-override is set, kube-proxy must set the same option, otherwise the Node will not be found;
•Set --cgroup-driver to systemd, not cgroupfs, or kubelet will fail to start on CentOS (strictly, the docker and kubelet cgroup-driver settings just have to match; systemd is not mandatory).
•--experimental-bootstrap-kubeconfig points at the bootstrap kubeconfig file; kubelet uses the username and token inside it to send TLS Bootstrapping requests to kube-apiserver;
•After the administrator approves the CSR request, kubelet automatically creates the certificate and private key in the --cert-dir directory (kubelet-client.crt and kubelet-client.key) and then writes the --kubeconfig file;
•It is recommended to specify the kube-apiserver address in the --kubeconfig file. If --api-servers is not given, --require-kubeconfig must be set so that the kube-apiserver address is read from the config file; otherwise kubelet starts without finding a kube-apiserver (the log says the API Server was not found) and kubectl get nodes returns no Node. --require-kubeconfig was removed in 1.9.0, see the PR;
•--cluster-dns specifies the kubedns Service IP (it can be allocated ahead of time and assigned when the kubedns service is created later; it must lie within the --service-cluster-ip-range configured on the apiserver), and --cluster-domain specifies the domain suffix; both must be set together to take effect;
•--cluster-domain sets the search domain in a pod's /etc/resolv.conf. We initially set it to cluster.local., which resolved service DNS names fine but failed to resolve the FQDN pod names of headless services; changing it to cluster.local, i.e. dropping the trailing dot, fixed the problem. See my other article on domain/service name resolution in kubernetes.
•The kubelet.kubeconfig file specified by --kubeconfig=/etc/kubernetes/kubelet.kubeconfig does not exist before kubelet's first start; as described below, it is generated automatically once the CSR request is approved. If a ~/.kube/config file already exists on the node, you can copy it to this path and rename it kubelet.kubeconfig; all node nodes can share the same kubelet.kubeconfig, so newly added nodes join the cluster automatically without needing a CSR request. Likewise, on any host that can reach the kubernetes cluster, using kubectl --kubeconfig with the ~/.kube/config file passes authentication, since the file already contains credentials identifying you as the admin user with full cluster permissions.
•KUBELET_POD_INFRA_CONTAINER is the pod infrastructure image. A private registry address is used here; change it to your own when deploying. I uploaded a copy to Tenxcloud which can be pulled directly with docker pull index.tenxcloud.com/jimmy/pod-infrastructure:rhel7. The pod-infrastructure image is built by Redhat and is close to 80 MB, so downloading it takes a while; it does not actually run any real process, so you can also use Google's pause image gcr.io/google_containers/pause-amd64:3.0, which is only about 300 KB, or pull jimmysong/pause-amd64:3.0 from DockerHub.
•--fail-swap-on=false must be added to tolerate enabled swap, without which kubelet will not start.
6.3.6 Start kubelet
------------------------------------------------------------
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
------------------------------------------------------------
6.3.7 Approve the kubelet TLS certificate requests
When kubelet first starts, it sends a certificate signing request to kube-apiserver; the Node is only added to the cluster after the request is approved. Check the unapproved CSR requests:
------------------------------------------------------------
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-2b308 4m kubelet-bootstrap Pending
$ kubectl get nodes
No resources found.
------------------------------------------------------------
Approve the CSR request
------------------------------------------------------------
$ kubectl certificate approve csr-2b308
certificatesigningrequest "csr-2b308" approved
$ kubectl get nodes
NAME STATUS AGE VERSION
10.64.3.7 Ready 49m v1.6.1
------------------------------------------------------------
The kubelet kubeconfig file and key pair are generated automatically
------------------------------------------------------------
$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- root root Apr : /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- root root Apr : /etc/kubernetes/ssl/kubelet-client.crt
-rw------- root root Apr : /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- root root Apr : /etc/kubernetes/ssl/kubelet.crt
-rw------- root root Apr : /etc/kubernetes/ssl/kubelet.key
------------------------------------------------------------
Notes:
If you renew the kubernetes certificates but do not change token.csv, the node automatically rejoins the cluster after kubelet restarts, without sending a new certificate request and without running kubectl certificate approve on the master again. The precondition is that /etc/kubernetes/ssl/kubelet* and /etc/kubernetes/kubelet.kubeconfig on the node are not deleted; otherwise kubelet fails to start because it cannot find its certificates. Note: if kubelet reports certificate errors at startup, one trick is to copy the master's ~/.kube/config (generated automatically in the "install the kubectl command-line tool" step) to /etc/kubernetes/kubelet.kubeconfig on the node; the CSR step is then skipped and the node joins the cluster as soon as kubelet starts.
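When several nodes join at once, the pending requests can be approved in one pass instead of one by one (a sketch built on standard kubectl output):
------------------------------------------------------------
# Approve every Pending CSR in one go
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
------------------------------------------------------------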

6.4 Configure kube-proxy

6.4.1 Install conntrack
------------------------------------------------------------
yum install -y conntrack-tools
------------------------------------------------------------
6.4.2 Create the kube-proxy service configuration file
File path: /etc/systemd/system/kube-proxy.service
------------------------------------------------------------
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/opt/k8s/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
------------------------------------------------------------
6.4.3 The kube-proxy configuration file /etc/kubernetes/proxy
------------------------------------------------------------
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.55.36 --hostname-override=192.168.55.36 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=172.16.0.0/16"
------------------------------------------------------------
Notes:
•--hostname-override must match kubelet's value, otherwise kube-proxy cannot find the Node after starting and will not create any iptables rules;
•kube-proxy uses --cluster-cidr to distinguish cluster-internal from external traffic; it only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is set. This value must equal the apiserver's --service-cluster-ip-range;
•The configuration file given by --kubeconfig embeds the kube-apiserver address, username, certificate, and key for the request and its authentication;
•The predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs;
6.4.4 Start kube-proxy
------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
------------------------------------------------------------

6.4.5 Verification test
  Let's create an nginx service to check whether the cluster is usable.
  ------------------------------------------------------------
  $ kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  192.168.55.36 Ready <none> 1d v1.9.0

$ kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=index.tenxcloud.com/docker_library/nginx --port=80  # the --port value must match the port the container exposes
  deployment "nginx" created
  $ kubectl get pods
  NAME READY STATUS RESTARTS AGE
  nginx-744c5fd44f-lwnl7 0/1 Running 0 3m
  nginx-744c5fd44f-mzrwp 0/1 Running 0 3m

$ kubectl expose deployment nginx --type=NodePort --name=example-service
  service "example-service" exposed

$ kubectl describe svc example-service  # it takes a little while before all of the fields below are populated
  Name: example-service
  Namespace: default
  Labels: run=load-balancer-example
  Annotations: <none>
  Selector: run=load-balancer-example
  Type: NodePort
  IP: 172.16.238.215
  Port: <unset> 80/TCP
  TargetPort: 80/TCP
  NodePort: <unset> 31107/TCP
  Endpoints: 10.200.75.2:80,10.200.75.3:80
  Session Affinity: None
  External Traffic Policy: Cluster
  Events: <none>

$ curl "172.16.238.215:80"

------------------------------------------------------------
  Notes:
  Visiting 192.168.55.36:31107 returns the nginx page. 172.16.238.215 is the service IP, 10.200.75.2:80 and 10.200.75.3:80 are the container IPs, and 31107 is the host port mapped to the backend service (to be confirmed).
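The same nginx deployment can be reached at all three layers; a quick sketch of checks run from the node:
------------------------------------------------------------
curl 172.16.238.215:80      # via the service (cluster) IP
curl 10.200.75.2:80         # directly via one pod IP
curl 192.168.55.36:31107    # via the NodePort on the host
------------------------------------------------------------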

7. Install the kubedns Plugin

7.1 Modified kube-dns YAML Configuration

[root@k8s-master /opt/k8s/yml]# cat kube-dns.yml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 172.16.0.2  # this IP must match the kubelet --cluster-dns parameter
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-kube-dns-amd64:1.14.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-dnsmasq-nanny-amd64:1.14.9
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-sidecar-amd64:1.14.9
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

Note: the configuration above already incorporates the modifications (the mirror image addresses and the clusterIP). Be aware that the kube-system namespace must not be changed.

7.2 Apply the Manifest to Create kubedns, Then Verify

Create kubedns
[root@k8s-master /opt/k8s/yml]# kubectl apply -f kube-dns.yml
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created

Check the kubedns pod
[root@k8s-master /opt/k8s/yml]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-5c874ccb67-vqtvb 3/3 Running 0 29s

Verify kubedns
Note: create a pod, enter it, and check whether the nameserver in /etc/resolv.conf is 172.16.0.2
cat > httpd.yml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: daocloud.io/library/httpd
        ports:
        - containerPort: 80
EOF

[root@k8s-master /opt/k8s/yml]# kubectl apply -f httpd.yml
deployment "httpd-deployment" created

[root@k8s-master /opt/k8s/yml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
httpd-deployment-5c9bc776cb-x82hs 1/1 Running 0 34s 10.200.75.3 192.168.55.36

[root@k8s-master /opt/k8s/yml]# kubectl exec -ti httpd-deployment-5c9bc776cb-x82hs -- /bin/bash
root@httpd-deployment-5c9bc776cb-x82hs:/usr/local/apache2# cat /etc/resolv.conf
nameserver 172.16.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@httpd-deployment-5c9bc776cb-x82hs:/usr/local/apache2# ping kubernetes
PING kubernetes.default.svc.cluster.local (172.16.0.1) 56(84) bytes of data.
^C
--- kubernetes.default.svc.cluster.local ping statistics ---
18 packets transmitted, 0 received, 100% packet loss, time 17000ms

Note: pinging a ClusterIP directly does not work; a ClusterIP is routed to the service endpoints by iptables, so a service is only reachable through its ClusterIP plus port.
Second kubedns verification:

[root@k8s-master /opt/k8s/yml]# kubectl run busybox --rm -ti --image=busybox /bin/sh   # this pod is removed as soon as you exit
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 172.16.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # nslookup nginx-svc   # note: the nginx-svc service must be created beforehand
Server: 172.16.0.2
Address 1: 172.16.0.2 kube-dns.kube-system.svc.cluster.local

Name: nginx-svc
Address 1: 172.16.98.222 nginx-svc.default.svc.cluster.local
/ # ping kubernetes
PING kubernetes (172.16.0.1): 56 data bytes

8. Install the dashboard Plugin

8.1 Dashboard YAML Configuration (modified)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard
subjects:
- kind: ServiceAccount
  name: dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.7.1
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

8.2 Create the Dashboard Resources, Check Them, and Access the UI

Create the resources
[root@k8s-master /opt/k8s/yml]# kubectl apply -f doshboard.yml
serviceaccount "dashboard" created
clusterrolebinding "dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

Check the svc
[root@k8s-master /opt/k8s/yml]# kubectl get svc -o wide --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard NodePort 172.16.71.213 <none> 80:31130/TCP 10m k8s-app=kubernetes-dashboard

Check the pod
[root@k8s-master /opt/k8s/yml]# kubectl get pod -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE
kubernetes-dashboard-f874767d4-x8zn4 1/1 Running 0 11m 10.200.75.5 192.168.55.36

Access the dashboard
http://192.168.55.36:31130
