k8s 1.16 binary installation
Lab environment:
172.16.153.70 master
172.16.77.121 node1
172.16.77.122 node2
System initialization
[root@iZbp1c31t0jo4w553hd47uZ ~]# systemctl stop firewalld
[root@iZbp1c31t0jo4w553hd47uZ ~]# systemctl disable firewalld
[root@iZbp1c31t0jo4w553hd47uZ ~]# cat /etc/selinux/config |grep disable
# disabled - No SELinux policy is loaded.
SELINUX=disabled
[root@iZbp1c31t0jo4w553hd47uZ ~]# swapoff -a
[root@iZbp1c31t0jo4w553hd47uZ ~]# ntpdate time.windows.com
9 Oct 10:56:17 ntpdate[1365]: adjust time server 20.189.79.72 offset -0.004069 sec
[root@iZbp1c31t0jo4w553hd47uZ ~]# vim /etc/hosts
[root@iZbp1c31t0jo4w553hd47uZ ~]# cat /etc/hosts
172.16.153.70 master
172.16.77.121 node1
172.16.77.122 node2
[root@iZbp1c31t0jo4w553hd47uZ ~]# hostnamectl set-hostname master
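The same initialization is needed on node1 and node2. A minimal sketch, assuming the identical hosts file and time source (adjust the hostname per machine):

# run on node1, then repeat on node2 with the matching hostname
systemctl stop firewalld && systemctl disable firewalld
setenforce 0                                   # immediate; the config change below applies after a reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
swapoff -a                                     # also comment out any swap entry in /etc/fstab
ntpdate time.windows.com
cat >> /etc/hosts << 'EOF'
172.16.153.70 master
172.16.77.121 node1
172.16.77.122 node2
EOF
hostnamectl set-hostname node1                 # node2 on the second machine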
Upload the required files to the server (the Baidu pan link for each file is listed next to it):
[root@master ~]# ll
total 229408
-rw-r--r-- 1 root root 10148977 Oct 3 10:34 etcd.tar.gz https://pan.baidu.com/s/1PnEq-hBtWq9ZFmsbNwrKvA
-rw-r--r-- 1 root root 2296 Oct 3 13:02 HA.zip https://pan.baidu.com/s/1dSk6r2_Cxoi-jI9_seccHQ
-rw-r--r-- 1 root root 90767613 Oct 3 10:35 k8s-master.tar.gz https://pan.baidu.com/s/1dvbG5znazSugVOq_hB4mMg
-rw-r--r-- 1 root root 128129460 Oct 3 10:33 k8s-node.tar.gz https://pan.baidu.com/s/1peULtnSvBIdZsw0H6-5hEw
-rw-r--r-- 1 root root 5851667 Oct 3 10:46 TLS.tar.gz https://pan.baidu.com/s/1u8m6-Hyt2t7cfBOgzsdXzQ
Install the etcd cluster
Use cfssl to self-sign the etcd certificates.
Install the cfssl tools:
[root@master ~]# tar xf TLS.tar.gz
[root@master ~]# ls
etcd.tar.gz HA.zip k8s-master.tar.gz k8s-node.tar.gz TLS TLS.tar.gz
[root@master ~]# ls TLS
cfssl cfssl-certinfo cfssljson cfssl.sh etcd k8s
[root@master TLS]# cat cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
#cp -rf cfssl cfssl-certinfo cfssljson /usr/local/bin
chmod +x /usr/local/bin/cfssl*
[root@master TLS]# sh cfssl.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9.8M 100 9.8M 0 0 2293k 0 0:00:04 0:00:04 --:--:-- 2294k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2224k 100 2224k 0 0 1089k 0 0:00:02 0:00:02 --:--:-- 1088k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6440k 100 6440k 0 0 2230k 0 0:00:02 0:00:02 --:--:-- 2231k
[root@master TLS]# ll /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 10376657 Oct 9 11:24 /usr/local/bin/cfssl
-rwxr-xr-x 1 root root 6595195 Oct 9 11:24 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x 1 root root 2277873 Oct 9 11:24 /usr/local/bin/cfssljson
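A quick sanity check that the binaries are installed and executable (the exact output depends on the cfssl build):

cfssl version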
Generate the etcd certificates:
[root@master TLS]# cd etcd/
[root@master etcd]# pwd
/root/TLS/etcd
[root@master etcd]# ll
total 16
-rw-r--r-- 1 root root 287 Oct 3 13:12 ca-config.json
-rw-r--r-- 1 root root 209 Oct 3 13:12 ca-csr.json
-rwxr-xr-x 1 root root 178 Oct 3 13:58 generate_etcd_cert.sh
-rw-r--r-- 1 root root 306 Oct 3 08:26 server-csr.json
[root@master etcd]# cat generate_etcd_cert.sh
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
Initialize a CA:
[root@master etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/10/09 11:27:50 [INFO] generating a new CA key and certificate from CSR
2019/10/09 11:27:50 [INFO] generate received request
2019/10/09 11:27:50 [INFO] received CSR
2019/10/09 11:27:50 [INFO] generating key: rsa-2048
2019/10/09 11:27:51 [INFO] encoded CSR
2019/10/09 11:27:51 [INFO] signed certificate with serial number 243984200992790636783468017675717297449835481076
[root@master etcd]# ll *pem
-rw------- 1 root root 1679 Oct 9 11:27 ca-key.pem #CA private key
-rw-r--r-- 1 root root 1265 Oct 9 11:27 ca.pem #CA certificate
Ask the CA to issue a certificate for etcd (all three member IPs go into hosts):
[root@master etcd]# cat server-csr.json
{
    "CN": "etcd",
    "hosts": [
        "172.16.153.70",
        "172.16.77.121",
        "172.16.77.122"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
[root@master etcd]# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
Generate the etcd server certificate:
[root@master etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2019/10/09 11:33:59 [INFO] generate received request
2019/10/09 11:33:59 [INFO] received CSR
2019/10/09 11:33:59 [INFO] generating key: rsa-2048
2019/10/09 11:33:59 [INFO] encoded CSR
2019/10/09 11:33:59 [INFO] signed certificate with serial number 730704670326462109576871660342343616627819385700
2019/10/09 11:33:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master etcd]# ll server*.pem
-rw------- 1 root root 1675 Oct 9 11:33 server-key.pem
-rw-r--r-- 1 root root 1338 Oct 9 11:33 server.pem
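Optionally verify that all three etcd member IPs ended up in the certificate's SAN list before distributing it:

cfssl-certinfo -cert server.pem | grep -A5 '"sans"'
# or: openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"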
Unpack etcd:
[root@master ~]# tar xf etcd.tar.gz
[root@master ~]# ll
total 229420
drwxr-xr-x 5 root root 4096 Oct 2 22:13 etcd
-rw-r--r-- 1 root root 1078 Oct 2 23:10 etcd.service
[root@master ~]# ll /opt/
total 0
[root@master ~]# tree etcd
etcd
├── bin
│ ├── etcd
│ └── etcdctl
├── cfg
│ └── etcd.conf
└── ssl
├── ca.pem
├── server-key.pem
└── server.pem
3 directories, 6 files
Edit the etcd configuration file. The trailing # annotations below are explanations for this write-up; keep the real file free of inline comments, since systemd's EnvironmentFile does not strip them:
[root@master cfg]# cat etcd.conf
#[Member]
ETCD_NAME="etcd-1" #name of this etcd member
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" #data directory
ETCD_LISTEN_PEER_URLS="https://172.16.153.70:2380" #peer (cluster-internal) listen URL
ETCD_LISTEN_CLIENT_URLS="https://172.16.153.70:2379" #client listen URL
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.153.70:2380" #advertised peer URL
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.153.70:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.153.70:2380,etcd-2=https://172.16.77.121:2380,etcd-3=https://172.16.77.122:2380" #all cluster members
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" #token identifying this cluster during bootstrap
ETCD_INITIAL_CLUSTER_STATE="new" #cluster state; "new" for a fresh cluster
Edit the etcd systemd unit. Again, the trailing # annotations are explanatory only; in the real unit file each continued line must end with the backslash:
[root@master ~]# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \ #cert for client (external) connections
--key-file=/opt/etcd/ssl/server-key.pem \ #key for client (external) connections
--peer-cert-file=/opt/etcd/ssl/server.pem \ #cert for peer (cluster-internal) connections
--peer-key-file=/opt/etcd/ssl/server-key.pem \ #key for peer (cluster-internal) connections
--trusted-ca-file=/opt/etcd/ssl/ca.pem \ #CA trusted for client connections
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem #CA trusted for peer connections
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the TLS certificates into the etcd ssl directory:
[root@master ssl]# cp /root/TLS/etcd/{ca,server,server-key}.pem .
[root@master ssl]# ll
total 12
-rw-r--r-- 1 root root 1265 Oct 9 11:55 ca.pem
-rw------- 1 root root 1675 Oct 9 11:55 server-key.pem
-rw-r--r-- 1 root root 1338 Oct 9 11:55 server.pem
[root@master ssl]# cd /root/
[root@master ~]# mv etcd /opt/
[root@master ~]# tree /opt/
/opt/
└── etcd
├── bin
│ ├── etcd
│ └── etcdctl
├── cfg
│ └── etcd.conf
└── ssl
├── ca.pem
├── server-key.pem
└── server.pem
4 directories, 6 files
Copy the etcd directory to node1 and node2:
[root@master ~]# scp -r /opt/etcd root@172.16.77.121:/opt
[root@master ~]# scp -r /opt/etcd root@172.16.77.122:/opt
On node1 and node2, adjust the member name and the IP addresses in cfg/etcd.conf; a sketch for node1 is shown below.
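A sketch for node1, assuming the files were copied over unchanged. Only the member name and the listen/advertise URLs change per node; ETCD_INITIAL_CLUSTER stays identical on all members:

# on node1 (172.16.77.121)
sed -i '/^ETCD_NAME/s/etcd-1/etcd-2/' /opt/etcd/cfg/etcd.conf
sed -i '/_URLS=/s/172.16.153.70/172.16.77.121/' /opt/etcd/cfg/etcd.conf
# on node2 (172.16.77.122): use etcd-3 and 172.16.77.122 instead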
Copy the systemd unit file to each machine:
[root@master ~]# scp etcd.service root@172.16.77.122:/usr/lib/systemd/system
root@172.16.77.122's password:
etcd.service 100% 1078 650.1KB/s 00:00
[root@master ~]# scp etcd.service root@172.16.77.121:/usr/lib/systemd/system
root@172.16.77.121's password:
etcd.service 100% 1078 729.9KB/s 00:00
[root@master ~]#
[root@master ~]#
[root@master ~]# scp etcd.service /usr/lib/systemd/system    # local copy for the master itself
Start the etcd service. Do this on all three machines; the service only reports ready once enough members have joined:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start etcd
[root@master ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Check the etcd cluster health:
[root@master ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.77.121:2379,https://172.16.77.122:2379,https://172.16.153.70:2379" cluster-health
member 43721c4082c10e0d is healthy: got healthy result from https://172.16.153.70:2379
member c0a09aa80ae1d891 is healthy: got healthy result from https://172.16.77.122:2379
member c38568aa50bbde03 is healthy: got healthy result from https://172.16.77.121:2379
cluster is healthy
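The check above uses the etcdctl v2 API. If the bundled etcdctl also supports the v3 API (the one Kubernetes itself uses), the equivalent health check looks like this; note the different flag names:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.153.70:2379,https://172.16.77.121:2379,https://172.16.77.122:2379" \
  endpoint health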
Install the master components
Self-sign the apiserver certificates
[root@master ~]# cd TLS/k8s/
[root@master k8s]# ll
total 20
-rw-r--r-- 1 root root 294 Oct 3 13:12 ca-config.json #signing config for the Kubernetes CA
-rw-r--r-- 1 root root 263 Oct 3 13:12 ca-csr.json
-rwxr-xr-x 1 root root 321 Oct 3 08:46 generate_k8s_cert.sh
-rw-r--r-- 1 root root 230 Oct 3 13:12 kube-proxy-csr.json #CSR for the kube-proxy certificate used on worker nodes
-rw-r--r-- 1 root root 718 Oct 3 08:45 server-csr.json #CSR for the apiserver certificate
[root@master k8s]# cat kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
[root@master k8s]# cat server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "172.16.153.70",
        "172.16.77.121",
        "172.16.77.122"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
Generate the certificates:
[root@master k8s]# ./generate_k8s_cert.sh
2019/10/09 14:20:36 [INFO] generating a new CA key and certificate from CSR
2019/10/09 14:20:36 [INFO] generate received request
2019/10/09 14:20:36 [INFO] received CSR
2019/10/09 14:20:36 [INFO] generating key: rsa-2048
2019/10/09 14:20:37 [INFO] encoded CSR
2019/10/09 14:20:37 [INFO] signed certificate with serial number 455662557732513862994369493822352115010355965578
2019/10/09 14:20:37 [INFO] generate received request
2019/10/09 14:20:37 [INFO] received CSR
2019/10/09 14:20:37 [INFO] generating key: rsa-2048
2019/10/09 14:20:37 [INFO] encoded CSR
2019/10/09 14:20:37 [INFO] signed certificate with serial number 608159772956440920681276106363224448757755871864
2019/10/09 14:20:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/10/09 14:20:37 [INFO] generate received request
2019/10/09 14:20:37 [INFO] received CSR
2019/10/09 14:20:37 [INFO] generating key: rsa-2048
2019/10/09 14:20:37 [INFO] encoded CSR
2019/10/09 14:20:37 [INFO] signed certificate with serial number 214111655217147844580852409042396632141577923697
2019/10/09 14:20:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s]# ll *pem
-rw------- 1 root root 1675 Oct 9 14:20 ca-key.pem #CA key and certificate
-rw-r--r-- 1 root root 1359 Oct 9 14:20 ca.pem
-rw------- 1 root root 1675 Oct 9 14:20 kube-proxy-key.pem #certificate used by kube-proxy on the nodes
-rw-r--r-- 1 root root 1403 Oct 9 14:20 kube-proxy.pem
-rw------- 1 root root 1675 Oct 9 14:20 server-key.pem #certificate used by the apiserver
-rw-r--r-- 1 root root 1627 Oct 9 14:20 server.pem
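Before moving on it is worth confirming the apiserver certificate's SANs, since a missing entry causes hard-to-debug TLS errors later (10.0.0.1 is the kubernetes Service IP taken from the 10.0.0.0/24 service range):

openssl x509 -in server.pem -noout -text | grep -A2 "Subject Alternative Name"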
On the master node:
Binary package download:
https://pan.baidu.com/s/1dvbG5znazSugVOq_hB4mMg
Deploy the kube-apiserver component
[root@master ~]# tar xf k8s-master.tar.gz
[root@master ~]# ll
total 229432
-rw-r--r-- 1 root root 1078 Oct 2 23:10 etcd.service
-rw-r--r-- 1 root root 10148977 Oct 3 10:34 etcd.tar.gz
-rw-r--r-- 1 root root 2296 Oct 3 13:02 HA.zip
-rw-r--r-- 1 root root 90767613 Oct 3 10:35 k8s-master.tar.gz
-rw-r--r-- 1 root root 128129460 Oct 3 10:33 k8s-node.tar.gz
-rw-r--r-- 1 root root 286 Oct 2 23:13 kube-apiserver.service
-rw-r--r-- 1 root root 321 Oct 2 23:13 kube-controller-manager.service
drwxr-xr-x 6 root root 4096 Oct 2 22:13 kubernetes
-rw-r--r-- 1 root root 285 Oct 2 23:13 kube-scheduler.service
drwxr-xr-x 4 root root 4096 Oct 9 11:24 TLS
-rw-r--r-- 1 root root 5851667 Oct 3 10:46 TLS.tar.gz
[root@master ~]# cd kubernetes/
[root@master kubernetes]# ll
total 16
drwxr-xr-x 2 root root 4096 Oct 3 09:06 bin
drwxr-xr-x 2 root root 4096 Oct 3 08:55 cfg
drwxr-xr-x 2 root root 4096 Oct 2 23:13 logs
drwxr-xr-x 2 root root 4096 Oct 3 10:34 ssl
[root@master kubernetes]# tree
.
├── bin
│ ├── kube-apiserver
│ ├── kube-controller-manager
│ ├── kubectl
│ └── kube-scheduler
├── cfg
│ ├── kube-apiserver.conf
│ ├── kube-controller-manager.conf
│ ├── kube-scheduler.conf
│ └── token.csv
├── logs
└── ssl
Copy the certificates first:
[root@master kubernetes]# cp /root/TLS/k8s/*pem ssl/
[root@master kubernetes]# tree
.
├── bin
│ ├── kube-apiserver
│ ├── kube-controller-manager
│ ├── kubectl
│ └── kube-scheduler
├── cfg
│ ├── kube-apiserver.conf
│ ├── kube-controller-manager.conf
│ ├── kube-scheduler.conf
│ └── token.csv
├── logs
└── ssl
├── ca-key.pem
├── ca.pem
├── server-key.pem
└── server.pem
4 directories, 12 files
Edit the apiserver configuration. The trailing # annotations below are explanations for this write-up only; the real kube-apiserver.conf must not contain them, since they would end up inside the quoted option string (compare the clean command line in the ps output further down):
[root@master cfg]# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \ #log to files instead of stderr
--v=2 \ #log verbosity
--log-dir=/opt/kubernetes/logs \ #log directory
--etcd-servers=https://172.16.153.70:2379,https://172.16.77.121:2379,https://172.16.77.122:2379 \ #etcd endpoints
--bind-address=172.16.153.70 \ #address the apiserver listens on
--secure-port=6443 \ #secure port
--advertise-address=172.16.153.70 \ #address advertised to nodes
--allow-privileged=true \ #allow privileged containers
--service-cluster-ip-range=10.0.0.0/24 \ #virtual IP range for Services
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \ #admission plugins
--authorization-mode=RBAC,Node \ #authorization modes
--enable-bootstrap-token-auth=true \ #enable bootstrap token authentication
--token-auth-file=/opt/kubernetes/cfg/token.csv \ #bootstrap token file
--service-node-port-range=30000-32767 \ #NodePort range for Services
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \ #client cert used to talk to kubelets
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \ #TLS cert/key for the apiserver's HTTPS endpoint
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \ #certificates used to connect to etcd
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \ #audit log retention policy
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
[root@master ~]# mv kubernetes /opt/
[root@master ~]# mv kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system/
Start the apiserver:
[root@master ~]# systemctl start kube-apiserver
[root@master ~]# ps -ef|grep kube
root 11567 1 71 14:52 ? 00:00:04 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://172.16.153.70:2379,https://172.16.77.121:2379,https://172.16.77.122:2379 --bind-address=172.16.153.70 --secure-port=6443 --advertise-address=172.16.153.70 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log
root 11579 1401 0 14:52 pts/0 00:00:00 grep --color=auto kube
Deploy the kube-controller-manager component (the same caveat about the inline # annotations applies):
[root@master cfg]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \ #leader election
--master=127.0.0.1:8080 \ #apiserver address (local insecure port on the master)
--address=127.0.0.1 \
--allocate-node-cidrs=true \ #allocate pod CIDRs to nodes (used by the CNI plugin)
--cluster-cidr=10.244.0.0/16 \ #must match the CNI plugin's pod network
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \ #CA used to sign kubelet certificates
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \ #root CA handed out with service account tokens
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s" #validity of the kubelet certificates issued to nodes (10 years)
[root@master ~]# systemctl start kube-controller-manager
[root@master ~]# ps -ef|grep controller
root 11589 1 5 14:54 ? 00:00:01 /opt/kubernetes/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect=true --master=127.0.0.1:8080 --address=127.0.0.1 --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 --service-cluster-ip-range=10.0.0.0/24 --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s
root 11615 1401 0 14:54 pts/0 00:00:00 grep --color=auto controller
Deploy the kube-scheduler component
[root@master cfg]# cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"
[root@master ~]# systemctl start kube-scheduler
[root@master ~]# ps -ef|grep scheduler
root 11603 1 6 14:54 ? 00:00:01 /opt/kubernetes/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --master=127.0.0.1:8080 --address=127.0.0.1
root 11617 1401 0 14:54 pts/0 00:00:00 grep --color=auto scheduler
Enable the services at boot:
[root@master ~]# for i in $(ls /opt/kubernetes/bin);do systemctl enable $i;done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Failed to execute operation: No such file or directory
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
(The "No such file or directory" error is expected: kubectl in /opt/kubernetes/bin is a plain binary with no corresponding .service unit.)
[root@master ~]# mv /opt/kubernetes/bin/kubectl /usr/local/bin/
[root@master ~]# kubectl get cs
NAME AGE
scheduler <unknown>
controller-manager <unknown>
etcd-0 <unknown>
etcd-1 <unknown>
etcd-2 <unknown>
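The <unknown> columns are a known cosmetic issue with kubectl get cs in 1.16; the components are up. To see real health data, either dump the objects or query the components' local healthz endpoints (10251/10252 are the default insecure ports used by this setup):

kubectl get cs -o yaml | grep -E "name:|message:"
curl http://127.0.0.1:10251/healthz   # kube-scheduler
curl http://127.0.0.1:10252/healthz   # kube-controller-manager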
Enable TLS bootstrapping so that kubelet client certificates are issued automatically:
[root@master ~]# cd /opt/kubernetes/cfg/
[root@master cfg]# ll
total 16
-rw-r--r-- 1 root root 1193 Oct 9 14:32 kube-apiserver.conf
-rw-r--r-- 1 root root 546 Oct 2 22:14 kube-controller-manager.conf
-rw-r--r-- 1 root root 148 Oct 2 22:14 kube-scheduler.conf
-rw-r--r-- 1 root root 83 Oct 2 22:14 token.csv
[root@master cfg]# cat token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
Authorize the kubelet-bootstrap user:
[root@master cfg]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
The token can also be regenerated and replaced:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
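A sketch of rotating the token, assuming the default file layout used in this walkthrough. The same value must land in token.csv on the master and in the token: field of bootstrap.kubeconfig on every node, followed by service restarts:

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# master: replace the first field of token.csv, then restart the apiserver
sed -i "s/^[^,]*/${TOKEN}/" /opt/kubernetes/cfg/token.csv
systemctl restart kube-apiserver
# each node: put the same value into bootstrap.kubeconfig, then restart kubelet
sed -i "s/token: .*/token: ${TOKEN}/" /opt/kubernetes/cfg/bootstrap.kubeconfig
systemctl restart kubelet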
Deploy the node components
Operate on the node.
Prepare the configuration files.
Binary package download: https://pan.baidu.com/s/1peULtnSvBIdZsw0H6-5hEw
[root@master cfg]# cd /root/
[root@master ~]# scp -r k8s-node.tar.gz root@172.16.77.121:/root/
root@172.16.77.121's password:
k8s-node.tar.gz 100% 122MB 76.4MB/s 00:01
[root@node1 ~]# tar xf k8s-node.tar.gz
[root@node1 ~]# ll
total 207876
-rw-r--r-- 1 root root 36662740 Aug 15 19:33 cni-plugins-linux-amd64-v0.8.2.tgz
-rw-r--r-- 1 root root 110 Oct 3 10:01 daemon.json
-rw-r--r-- 1 root root 48047231 Jun 25 16:45 docker-18.09.6.tgz
-rw-r--r-- 1 root root 501 Oct 3 10:01 docker.service
-rw-r--r-- 1 root root 128129460 Oct 9 15:06 k8s-node.tar.gz
-rw-r--r-- 1 root root 268 Oct 2 23:11 kubelet.service
-rw-r--r-- 1 root root 253 Oct 2 23:11 kube-proxy.service
drwxr-xr-x 6 root root 4096 Oct 2 22:14 kubernetes
Install Docker:
[root@node1 ~]# tar xf docker-18.09.6.tgz
[root@node1 ~]# ll
total 207880
-rw-r--r-- 1 root root 36662740 Aug 15 19:33 cni-plugins-linux-amd64-v0.8.2.tgz
-rw-r--r-- 1 root root 110 Oct 3 10:01 daemon.json
drwxrwxr-x 2 1000 1000 4096 May 4 10:42 docker
-rw-r--r-- 1 root root 48047231 Jun 25 16:45 docker-18.09.6.tgz
-rw-r--r-- 1 root root 501 Oct 3 10:01 docker.service
-rw-r--r-- 1 root root 128129460 Oct 9 15:06 k8s-node.tar.gz
-rw-r--r-- 1 root root 268 Oct 2 23:11 kubelet.service
-rw-r--r-- 1 root root 253 Oct 2 23:11 kube-proxy.service
drwxr-xr-x 6 root root 4096 Oct 2 22:14 kubernetes
[root@node1 ~]# mv docker/* /usr/bin/
[root@node1 ~]# mkdir /etc/docker
[root@node1 ~]# mv daemon.json /etc/docker/
[root@node1 ~]# mv docker.service /usr/lib/systemd/system/
[root@node1 ~]# systemctl start docker
[root@node1 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@node1 ~]# docker version
Client: Docker Engine - Community
Version: 18.09.6
API version: 1.39
Go version: go1.10.8
Git commit: 481bc77
Built: Sat May 4 02:33:34 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.6
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 481bc77
Built: Sat May 4 02:41:08 2019
OS/Arch: linux/amd64
Experimental: false
Deploy the kubelet component on the node
[root@node1 ~]# cd kubernetes/
[root@node1 kubernetes]# ll
total 16
drwxr-xr-x 2 root root 4096 Oct 3 09:07 bin
drwxr-xr-x 2 root root 4096 Oct 3 09:45 cfg
drwxr-xr-x 2 root root 4096 Oct 3 09:18 logs
drwxr-xr-x 2 root root 4096 Oct 3 09:18 ssl
[root@node1 kubernetes]# tree
.
├── bin
│ ├── kubelet
│ └── kube-proxy
├── cfg
│ ├── bootstrap.kubeconfig
│ ├── kubelet.conf
│ ├── kubelet-config.yml
│ ├── kube-proxy.conf
│ ├── kube-proxy-config.yml
│ └── kube-proxy.kubeconfig
├── logs
└── ssl
4 directories, 8 files
*.conf: basic startup options for each component
*.kubeconfig: configuration used to connect to the apiserver
*.yml: the component's own configuration file
[root@node1 cfg]# cat kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=node1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
[root@node1 cfg]# cat bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://172.16.153.70:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940
The token here must match the entry in token.csv on the master:
[root@master cfg]# cat token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
[root@node1 cfg]# cat kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
[root@node1 ~]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Move the service unit files into place:
[root@node1 ~]# mv *service /usr/lib/systemd/system/
Copy the certificates from the master's TLS/k8s directory to the node:
[root@master k8s]# scp -r ca.pem kube-proxy*pem root@172.16.77.121:/opt/kubernetes/ssl/
root@172.16.77.121's password:
ca.pem 100% 1359 1.0MB/s 00:00
kube-proxy-key.pem 100% 1675 1.3MB/s 00:00
kube-proxy.pem 100% 1403 1.1MB/s 00:00
Start the kubelet service:
[root@node1 kubernetes]# systemctl start kubelet
[root@node1 kubernetes]# ps aux|grep kubelet
root 12046 0.1 0.9 430624 37152 ? Ssl 17:02 0:00 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=node1 --network-plugin=cni --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet-config.yml --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=lizhenliang/pause-amd64:3.0
Back on the master, approve the node's certificate request:
[root@master k8s]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-igpk3RzozP8Wo_MPjcRSzgwQXTNBZdWKkHI5Ofnp_oo 118s kubelet-bootstrap Pending
[root@master k8s]# kubectl certificate approve node-csr-igpk3RzozP8Wo_MPjcRSzgwQXTNBZdWKkHI5Ofnp_oo
certificatesigningrequest.certificates.k8s.io/node-csr-igpk3RzozP8Wo_MPjcRSzgwQXTNBZdWKkHI5Ofnp_oo approved
[root@master k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady <none> 47s v1.16.0
node1 will change to Ready once the CNI plugin has been deployed.
During the bootstrap above, the kubelet certificate and kubeconfig were generated automatically:
[root@node1 kubernetes]# ll ssl/
total 24
-rw-r--r-- 1 root root 1359 Oct 9 17:01 ca.pem
-rw------- 1 root root 1265 Oct 9 17:04 kubelet-client-2019-10-09-17-04-39.pem
lrwxrwxrwx 1 root root 58 Oct 9 17:04 kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2019-10-09-17-04-39.pem
-rw-r--r-- 1 root root 2144 Oct 9 16:57 kubelet.crt
-rw------- 1 root root 1675 Oct 9 16:57 kubelet.key
-rw------- 1 root root 1675 Oct 9 17:01 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Oct 9 17:01 kube-proxy.pem
[root@node1 kubernetes]# ll cfg/
total 28
-rw-r--r-- 1 root root 376 Oct 9 16:28 bootstrap.kubeconfig
-rw-r--r-- 1 root root 388 Oct 9 16:25 kubelet.conf
-rw-r--r-- 1 root root 611 Oct 2 22:15 kubelet-config.yml
-rw------- 1 root root 505 Oct 9 17:04 kubelet.kubeconfig
-rw-r--r-- 1 root root 132 Oct 3 09:16 kube-proxy.conf
-rw-r--r-- 1 root root 315 Oct 9 16:48 kube-proxy-config.yml
-rw-r--r-- 1 root root 432 Oct 9 16:45 kube-proxy.kubeconfig
Deploy kube-proxy
The relevant configuration files are as follows:
[root@node1 cfg]# cat kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
[root@node1 cfg]# cat kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node1
clusterCIDR: 10.0.0.0/24
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
[root@node1 cfg]# cat kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://172.16.153.70:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem
[root@node1 ~]# cat kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
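Up to here only node1 has been configured. node2 (172.16.77.122) is brought up the same way: copy k8s-node.tar.gz over, repeat the Docker, kubelet and kube-proxy steps, and change the two hostname fields before starting the services. A sketch, assuming the same file layout:

# on node2
sed -i 's/node1/node2/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml
systemctl daemon-reload && systemctl start kubelet kube-proxy
# then approve node2's CSR on the master with kubectl certificate approve, as was done for node1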
[root@node2 kubernetes]# systemctl start kube-proxy
[root@node2 kubernetes]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node2 kubernetes]# ps axu|grep kube-proxy
root 11891 0.4 0.6 142856 23436 ? Ssl 17:21 0:00 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-proxy-config.yml
root 12049 0.0 0.0 112708 988 pts/0 S+ 17:21 0:00 grep --color=auto kube-proxy
[root@master k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady <none> 17m v1.16.0
node2 NotReady <none> 3m12s v1.16.0
Deploy the CNI network
Binary package download: https://github.com/containernetworking/plugins/releases
Do the following on each node (shown for node1; repeat on node2):
[root@node1 ~]# ll
total 207880
-rw-r--r-- 1 root root 36662740 Aug 15 19:33 cni-plugins-linux-amd64-v0.8.2.tgz
[root@node1 ~]# mkdir /opt/cni/bin -p
[root@node1 ~]# mkdir /opt/cni/net.d -p
[root@node1 ~]# tar xf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/
[root@node1 ~]# ll /opt/cni/bin/
total 70072
-rwxr-xr-x 1 root root 4159253 Aug 15 18:05 bandwidth
-rwxr-xr-x 1 root root 4619706 Aug 15 18:05 bridge
-rwxr-xr-x 1 root root 12124236 Aug 15 18:05 dhcp
-rwxr-xr-x 1 root root 5968249 Aug 15 18:05 firewall
-rwxr-xr-x 1 root root 3069474 Aug 15 18:05 flannel
-rwxr-xr-x 1 root root 4113755 Aug 15 18:05 host-device
-rwxr-xr-x 1 root root 3614305 Aug 15 18:05 host-local
-rwxr-xr-x 1 root root 4275238 Aug 15 18:05 ipvlan
-rwxr-xr-x 1 root root 3178836 Aug 15 18:05 loopback
-rwxr-xr-x 1 root root 4337932 Aug 15 18:05 macvlan
-rwxr-xr-x 1 root root 3891363 Aug 15 18:05 portmap
-rwxr-xr-x 1 root root 4542556 Aug 15 18:05 ptp
-rwxr-xr-x 1 root root 3392736 Aug 15 18:05 sbr
-rwxr-xr-x 1 root root 2885430 Aug 15 18:05 static
-rwxr-xr-x 1 root root 3261232 Aug 15 18:05 tuning
-rwxr-xr-x 1 root root 4275044 Aug 15 18:05 vlan
Make sure kubelet is configured to use the CNI network plugin:
[root@node1 ~]# cat /opt/kubernetes/cfg/kubelet.conf |grep cni
--network-plugin=cni \
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Deploy the flannel CNI plugin.
This is done on the master, as follows:
[root@master ~]# ll kube-flannel.yaml
-rw-r--r-- 1 root root 5032 Oct 2 22:16 kube-flannel.yaml
(The kube-flannel DaemonSet pods run in the host network of each node.)
[root@master ~]# kubectl apply -f kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
[root@master ~]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-7c962 1/1 Running 0 41s
kube-flannel-ds-amd64-g4b4q 1/1 Running 0 41s
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready <none> 32m v1.16.0
node2 Ready <none> 18m v1.16.0
Grant the apiserver access to the kubelet API so that kubectl logs (and exec) work against pods:
[root@master ~]# ll apiserver-to-kubelet-rbac.yaml
-rw-r--r-- 1 root root 745 Oct 2 22:14 apiserver-to-kubelet-rbac.yaml
[root@master ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
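With the nodes Ready and the RBAC binding in place, a quick smoke test could look like the following (the deployment name and image are illustrative, not part of the original walkthrough):

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=NodePort
kubectl get pods,svc -o wide
kubectl logs -l app=web        # pod logs now work thanks to the binding above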
Deploy the Dashboard UI
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
On the master:
[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
[root@master ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
This lab exposes the Dashboard through a NodePort Service (the Service in recommended.yaml was changed to type NodePort with port 30001 before applying), so the UI is reachable at https://<any-node-IP>:30001.
[root@master ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-566cddb686-h5f4h 1/1 Running 0 81s
kubernetes-dashboard-7b5bf5d559-49ltn 1/1 Running 0 81s
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.0.0.225 <none> 8000/TCP 96s
kubernetes-dashboard NodePort 10.0.0.19 <none> 443:30001/TCP 96s
Authenticate with a token.
Create an admin user:
[root@master ~]# ll dashboard-adminuser.yaml
-rw-r--r-- 1 root root 373 Oct 4 14:50 dashboard-adminuser.yaml
[root@master ~]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
Retrieve the token:
[root@master ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-zp7r5
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 26a698fd-afd2-482d-9d00-e2a9352113fd
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1359 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjFSSGFvclJuU0ZmeDM2SG10YzNaUW9CdDY0UGd4MzZabHZ1dlRXM2J4NVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXpwN3I1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyNmE2OThmZC1hZmQyLTQ4MmQtOWQwMC1lMmE5MzUyMTEzZmQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.BHZuAKbogTeX2tImy0_Ia02jQEpo0chQP4OJaVo78YfC5NoFBr3n56xhpZ_m-GdcOAN2dt1Z2uPQfJ1bpPP0lObKhK8xEtKaoXm0UUHjsDHlBrOTCcOoaPkMFkLVLulU_BwZecrCO1wyNI1U_dLBX4WjBYp_4PFqU3HflYFvYzUYImkzNKot-GibBiH_9pedYTTkhOFljQWz_sSWMNr6AnhTpNDvut8m1uTgQepMTWzqmZweHtwSt4owbWqsIroviUxcZy9NQ0YxvcNod4l1ppp07h71ECuhhNzGusECTE3ANO2aX4La0kmsg9C0-QlMckIF_1cXbzgBVTZsqxLFrQ
Deploy DNS (CoreDNS)
[root@master ~]# ll coredns.yaml
-rw-r--r-- 1 root root 4283 Oct 2 22:16 coredns.yaml
[root@master ~]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8cfdd59d-7ncxd 1/1 Running 0 8s
kube-flannel-ds-amd64-7c962 1/1 Running 0 26m
kube-flannel-ds-amd64-g4b4q 1/1 Running 0 26m
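To confirm that Service discovery works, a throwaway pod can resolve the kubernetes Service through CoreDNS (the busybox image and tag are an assumption; any image with nslookup will do):

kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes
# expect the DNS server to be 10.0.0.2 and kubernetes to resolve to 10.0.0.1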