Kubernetes v1.15 Binary Cluster Deployment
1. Architecture
1.1 Kubernetes architecture overview
1.2 Flannel network architecture diagram
1.3 Kubernetes workflow
2. Components
2.1 Master node
- 2.1.1 API Server (gateway service): exposes the Kubernetes API, handles REST operations, and updates objects in etcd; it is the single entry point for creating, reading, updating, and deleting all resources
- Only the API Server operates on etcd directly
- All other components query or modify data through the API Server
- It is therefore the hub for data exchange and communication between the other components
- 2.1.2 Scheduler: resource scheduling; assigns Pods to Nodes in the cluster
- Watches kube-apiserver for Pods that have not yet been assigned a Node
- Assigns a node to each such Pod according to the scheduling policy
- 2.1.3 Controller Manager: performs all remaining cluster-level functions and acts as the automation control center for resource objects. It monitors the overall cluster state through the apiserver and keeps the cluster in the desired working state.
- 2.1.4 etcd: stores all persistent cluster state
2.2 Node
- 2.2.1 Kubelet: manages Pods, containers, images, volumes, and so on, implementing management of the cluster's nodes.
- 2.2.2 Kube-proxy: provides network proxying and load balancing for communication with Services.
- 2.2.3 Docker: handles container management on the node
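Since every component interacts with the cluster only through the API Server, a quick way to see this in practice is to query its REST API directly. This is just a sketch: it assumes the master address used throughout this guide (172.16.1.31:6443) and the admin client certificate generated later in section 4.4.

```bash
# List nodes via the REST API; every kubectl call ultimately goes through this same endpoint.
curl --cacert /opt/kubernetes/ssl/ca.pem \
     --cert   /opt/kubernetes/ssl/admin.pem \
     --key    /opt/kubernetes/ssl/admin-key.pem \
     https://172.16.1.31:6443/api/v1/nodes
```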
3. Environment
3.1 Node layout
Hostname | IP | Role | Software |
---|---|---|---|
linux-node1 | 172.16.1.31 | master | apiserver, scheduler, controller-manager, etcd, flanneld |
linux-node2 | 172.16.1.32 | node | kubelet, kube-proxy, etcd, flanneld |
linux-node3 | 172.16.1.33 | node | kubelet, kube-proxy, etcd, flanneld |
3.2 Package versions
Package | Download URL |
---|---|
kubernetes-node-linux-amd64.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes-node-linux-amd64.tar.gz |
kubernetes-server-linux-amd64.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz |
kubernetes-client-linux-amd64.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz |
kubernetes.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes.tar.gz |
flannel-v0.11.0-linux-amd64.tar.gz | https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz |
cni-plugins-amd64-v0.7.1.tgz | https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz |
etcd-v3.2.18-linux-amd64.tar.gz | https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz |
4. Kubernetes Installation
4.1 Initialize the environment
- 4.1.1 Disable the firewall, SELinux, and swap
- systemctl stop firewalld && systemctl disable firewalld
- setenforce 0
- vi /etc/selinux/config
- SELINUX=disabled
- swapoff -a && sysctl -w vm.swappiness=0
- sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
- 4.1.2 Add a domestic Docker repository and install Docker
- cd /etc/yum.repos.d/
- wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- yum clean all && yum repolist -y
- yum install -y docker-ce
- systemctl start docker
- 4.1.3 Prepare the deployment directories
- mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
- # scp -r /opt/kubernetes 172.16.1.32:/opt/
- # scp -r /opt/kubernetes 172.16.1.33:/opt/
- 4.1.4 Add the binaries directory to the PATH environment variable
- vim ~/.bash_profile
- # .bash_profile
- # Get the aliases and functions
- if [ -f ~/.bashrc ]; then
- . ~/.bashrc
- fi
- # User specific environment and startup programs
- PATH=$PATH:$HOME/bin:/opt/kubernetes/bin/
- export PATH
- source ~/.bash_profile
- # scp ~/.bash_profile 172.16.1.32:~/
- # scp ~/.bash_profile 172.16.1.33:~/
- 4.1.5 Configure kernel parameters (reboot the server afterwards)
- cat /etc/sysctl.conf
- net.ipv6.conf.all.disable_ipv6 = 1
- net.ipv6.conf.default.disable_ipv6 = 1
- net.ipv6.conf.lo.disable_ipv6 = 1
- vm.swappiness = 0
- net.ipv4.neigh.default.gc_stale_time=120
- net.ipv4.ip_forward = 1
- # see details in https://help.aliyun.com/knowledge_detail/39428.html
- net.ipv4.conf.all.rp_filter=0
- net.ipv4.conf.default.rp_filter=0
- net.ipv4.conf.default.arp_announce = 2
- net.ipv4.conf.lo.arp_announce=2
- net.ipv4.conf.all.arp_announce=2
- # see details in https://help.aliyun.com/knowledge_detail/41334.html
- net.ipv4.tcp_max_tw_buckets = 5000
- net.ipv4.tcp_syncookies = 1
- net.ipv4.tcp_max_syn_backlog = 1024
- net.ipv4.tcp_synack_retries = 2
- kernel.sysrq = 1
- # let iptables see bridged traffic (transparent bridging)
- net.bridge.bridge-nf-call-ip6tables = 1
- net.bridge.bridge-nf-call-iptables = 1
- net.bridge.bridge-nf-call-arptables = 1
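Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so applying the file on a fresh host may fail until it is. A minimal sketch (assuming the settings above live in /etc/sysctl.conf):

```bash
modprobe br_netfilter          # provides the net.bridge.bridge-nf-call-* sysctls
sysctl -p /etc/sysctl.conf     # apply the settings without waiting for the reboot
```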
4.2 Install the CA certificate tooling (the Kubernetes components use TLS certificates to encrypt their communication)
- 4.2.1 Install CFSSL
- [root@linux-node1 ~]# cd /usr/local/src
- [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
- [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
- [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
- [root@linux-node1 src]# chmod +x cfssl*
- [root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
- [root@linux-node1 src]# mv cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson
- [root@linux-node1 src]# mv cfssl_linux-amd64 /opt/kubernetes/bin/cfssl
- # Copy the cfssl binaries to the node machines; in practice, every node in the cluster needs them.
- # scp /opt/kubernetes/bin/cfssl* 172.16.1.32:/opt/kubernetes/bin/
- # scp /opt/kubernetes/bin/cfssl* 172.16.1.33:/opt/kubernetes/bin/
- 4.2.2 Generate template files
- [root@linux-node1 ~]# cd /usr/local/src
- [root@linux-node1 src]# mkdir ssl && cd ssl
- [root@linux-node1 ssl]# cfssl print-defaults config > config.json #default certificate-signing policy template
- [root@linux-node1 ssl]# cfssl print-defaults csr > csr.json #default CSR request template
- 4.2.3 Create the JSON config used to generate the CA file
- [root@linux-node1 ~]# vim /usr/local/src/ssl/ca-config.json
- {
- "signing": {
- "default": {
- "expiry": "8760h"
- },
- "profiles": {
- "kubernetes": {
- "usages": [
- "signing",
- "key encipherment",
- "server auth",
- "client auth"
- ],
- "expiry": "8760h"
- }
- }
- }
- }
- 4.2.4 Create the JSON config for the CA certificate signing request (CSR)
- [root@linux-node1 ~]# vim /usr/local/src/ssl/ca-csr.json
- {
- "CN": "kubernetes",
- "key": {
- "algo": "rsa",
- "size":
- },
- "names": [
- {
- "C": "CN",
- "ST": "BeiJing",
- "L": "BeiJing",
- "O": "k8s",
- "OU": "System"
- }
- ]
- }
- 4.2.5 Generate the CA certificate (ca.pem) and private key (ca-key.pem)
- [root@linux-node1 ~]# cd /usr/local/src/ssl
- [root@linux-node1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca #initialize the CA; this produces ca-key.pem (private key) and ca.pem (certificate)
- [root@linux-node1 ssl]# ls -l ca*
- -rw-r--r-- ca-config.json
- -rw-r--r-- ca.csr
- -rw-r--r-- ca-csr.json
- -rw------- ca-key.pem
- -rw-r--r-- ca.pem
- 4.2.6 Distribute the certificates
- [root@linux-node1 ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
- Copy the certificates to the other nodes:
- # scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.32:/opt/kubernetes/ssl
- # scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.33:/opt/kubernetes/ssl
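To sanity-check the CA before building everything on top of it, cfssl-certinfo (installed in 4.2.1) can decode the certificate; a quick sketch:

```bash
# Shows the subject (CN=kubernetes, O=k8s), validity window, and key usages
/opt/kubernetes/bin/cfssl-certinfo -cert /opt/kubernetes/ssl/ca.pem
```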
4.3 Deploy the etcd cluster
- 4.3.1 Prepare the etcd package
- [root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
- [root@linux-node1 src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz
- [root@linux-node1 src]# cd etcd-v3.2.18-linux-amd64
- [root@linux-node1 etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/
- # scp etcd etcdctl 172.16.1.32:/opt/kubernetes/bin/
- # scp etcd etcdctl 172.16.1.33:/opt/kubernetes/bin/
- 4.3.2 Create the etcd certificate signing request
- [root@linux-node1 src]# cd /usr/local/src
- [root@linux-node1 src]# vim /usr/local/src/etcd-csr.json
- {
- "CN": "etcd",
- "hosts": [
- "127.0.0.1",
- "172.16.1.31",
- "172.16.1.32",
- "172.16.1.33"
- ],
- "key": {
- "algo": "rsa",
- "size":
- },
- "names": [
- {
- "C": "CN",
- "ST": "BeiJing",
- "L": "BeiJing",
- "O": "k8s",
- "OU": "System"
- }
- ]
- }
- 4.3.3 Generate the etcd certificate and private key
- [root@linux-node1 ~]# cd /usr/local/src
- [root@linux-node1 src]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
- # the following certificate files are generated
- [root@k8s-master src]# ls -l etcd*
- -rw-r--r-- etcd.csr
- -rw-r--r-- etcd-csr.json
- -rw------- etcd-key.pem
- -rw-r--r-- etcd.pem
- 4.3.4 Move the certificates to /opt/kubernetes/ssl
- [root@k8s-master src]# cp etcd*.pem /opt/kubernetes/ssl
- # scp etcd*.pem 172.16.1.32:/opt/kubernetes/ssl
- # scp etcd*.pem 172.16.1.33:/opt/kubernetes/ssl
- [root@linux-node1 src]# rm -f etcd.csr etcd-csr.json
- 4.3.5 Create the etcd config file (it must be created by hand)
- # on the other nodes, change the highlighted values (node name and IP addresses)
- [root@linux-node1 ~]# vim /opt/kubernetes/cfg/etcd.conf
- #[member]
- ETCD_NAME="etcd-node1"
- ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
- #ETCD_SNAPSHOT_COUNTER=""
- #ETCD_HEARTBEAT_INTERVAL=""
- #ETCD_ELECTION_TIMEOUT=""
- ETCD_LISTEN_PEER_URLS="https://172.16.1.31:2380"
- ETCD_LISTEN_CLIENT_URLS="https://172.16.1.31:2379,https://127.0.0.1:2379"
- #ETCD_MAX_SNAPSHOTS=""
- #ETCD_MAX_WALS=""
- #ETCD_CORS=""
- #[cluster]
- ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.31:2380"
- # if you use different ETCD_NAME (e.g. test),
- # set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
- ETCD_INITIAL_CLUSTER="etcd-node1=https://172.16.1.31:2380,etcd-node2=https://172.16.1.32:2380,etcd-node3=https://172.16.1.33:2380"
- ETCD_INITIAL_CLUSTER_STATE="new"
- ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
- ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.31:2379"
- #[security]
- CLIENT_CERT_AUTH="true"
- ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
- ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
- ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
- PEER_CLIENT_CERT_AUTH="true"
- ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
- ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
- ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
- 4.3.6 Create the etcd systemd service
- [root@linux-node1 ~]# vim /etc/systemd/system/etcd.service
- [Unit]
- Description=Etcd Server
- After=network.target
- [Service]
- WorkingDirectory=/var/lib/etcd
- EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
- # set GOMAXPROCS to number of processors
- ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
- Type=notify
- [Install]
- WantedBy=multi-user.target
- 4.3.7 Reload systemd and distribute the files to the other nodes
- [root@linux-node1 ~]# systemctl daemon-reload
- [root@linux-node1 ~]# systemctl enable etcd
- # scp /opt/kubernetes/cfg/etcd.conf 172.16.1.32:/opt/kubernetes/cfg/
- # scp /opt/kubernetes/cfg/etcd.conf 172.16.1.33:/opt/kubernetes/cfg/
- # scp /etc/systemd/system/etcd.service 172.16.1.32:/etc/systemd/system/
- # scp /etc/systemd/system/etcd.service 172.16.1.33:/etc/systemd/system/
- # create the etcd data directory and start etcd on all nodes
- [root@linux-node1 ~]# mkdir /var/lib/etcd
- [root@linux-node1 ~]# systemctl start etcd
- [root@linux-node1 ~]# systemctl status etcd
- 4.3.8 Verify the cluster
- [root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
- member 435fb0a8da627a4c is healthy: got healthy result from https://172.16.1.32:2379
- member 6566e06d7343e1bb is healthy: got healthy result from https://172.16.1.31:2379
- member ce7b884e428b6c8c is healthy: got healthy result from https://172.16.1.33:2379
- cluster is healthy
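cluster-health only proves membership; a hedged read/write smoke test (the key /smoke-test is purely illustrative) confirms that writes replicate across members:

```bash
# Write through one member...
etcdctl --endpoints=https://172.16.1.31:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem set /smoke-test ok
# ...and read it back from another.
etcdctl --endpoints=https://172.16.1.32:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem get /smoke-test
```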
4.4 Master node deployment (Kubernetes API service)
- 4.4.1.1 [Kubernetes API service] Prepare the packages
- [root@linux-node1 ~]# #cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz # may require a proxy to download
- [root@linux-node1 ~]# #cd /usr/local/src && tar xf kubernetes-server-linux-amd64.tar.gz
- [root@linux-node1 ~]# cd /usr/local/src/kubernetes
- [root@linux-node1 kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/
- [root@linux-node1 kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/
- [root@linux-node1 kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/
- 4.4.1.2 [Kubernetes API service] Create the JSON config for generating the CSR
- [root@linux-node1 src]# vim /usr/local/src/ssl/kubernetes-csr.json
- {
- "CN": "kubernetes",
- "hosts": [
- "127.0.0.1",
- "172.16.1.31",
- "10.1.0.1",
- "kubernetes",
- "kubernetes.default",
- "kubernetes.default.svc",
- "kubernetes.default.svc.cluster",
- "kubernetes.default.svc.cluster.local"
- ],
- "key": {
- "algo": "rsa",
- "size":
- },
- "names": [
- {
- "C": "CN",
- "ST": "BeiJing",
- "L": "BeiJing",
- "O": "k8s",
- "OU": "System"
- }
- ]
- }
- 4.4.1.3 [Kubernetes API service] Generate the kubernetes certificate and private key
- [root@linux-node1 ssl]# cd /usr/local/src/ssl/
- [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
- [root@linux-node1 src]# cp kubernetes*.pem /opt/kubernetes/ssl/
- # scp kubernetes*.pem 172.16.1.32:/opt/kubernetes/ssl/
- # scp kubernetes*.pem 172.16.1.33:/opt/kubernetes/ssl/
- 4.4.1.4 [Kubernetes API service] Create the client token file used by kube-apiserver
- [root@linux-node1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
- cebfb6641d0845bd61808e2337955ea0
- [root@linux-node1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
- cebfb6641d0845bd61808e2337955ea0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
- 4.4.1.5 [Kubernetes API service] Create the basic username/password authentication config
- [root@linux-node1 ~]# vim /opt/kubernetes/ssl/basic-auth.csv
- admin,admin,1
- readonly,readonly,2
- 4.4.1.6 [Kubernetes API service] Deploy the Kubernetes API Server (the unit file also sets the port range Services may use for NodePort access)
- [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
- [Unit]
- Description=Kubernetes API Server
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- After=network.target
- [Service]
- ExecStart=/opt/kubernetes/bin/kube-apiserver \
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
- --bind-address=172.16.1.31 \
- --insecure-bind-address=127.0.0.1 \
- --authorization-mode=Node,RBAC \
- --runtime-config=rbac.authorization.k8s.io/v1 \
- --kubelet-https=true \
- --anonymous-auth=false \
- --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
- --enable-bootstrap-token-auth \
- --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
- --service-cluster-ip-range=10.1.0.0/16 \
- --service-node-port-range=20000-40000 \
- --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
- --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
- --client-ca-file=/opt/kubernetes/ssl/ca.pem \
- --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
- --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
- --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
- --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
- --etcd-servers=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 \
- --enable-swagger-ui=true \
- --allow-privileged=true \
- --audit-log-maxage=30 \
- --audit-log-maxbackup=3 \
- --audit-log-maxsize=100 \
- --audit-log-path=/opt/kubernetes/log/api-audit.log \
- --event-ttl=1h \
- --v=2 \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=5
- Type=notify
- LimitNOFILE=65536
- [Install]
- WantedBy=multi-user.target
- 4.4.1.7 [Kubernetes API service] Start the API Server
- [root@linux-node1 ~]# systemctl daemon-reload
- [root@linux-node1 ~]# systemctl enable kube-apiserver
- [root@linux-node1 ~]# systemctl start kube-apiserver
- 4.4.1.8 [Kubernetes API service] Check the API Server status
- [root@linux-node1 ~]# systemctl status kube-apiserver
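Beyond systemctl, the apiserver also answers health probes on its insecure localhost port (127.0.0.1:8080, given --insecure-bind-address above); a minimal check:

```bash
curl http://127.0.0.1:8080/healthz   # expected output: ok
```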
- 4.4.2.1 [Controller Manager] Configure the Controller Manager
- [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
- [Unit]
- Description=Kubernetes Controller Manager
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- [Service]
- ExecStart=/opt/kubernetes/bin/kube-controller-manager \
- --address=127.0.0.1 \
- --master=http://127.0.0.1:8080 \
- --allocate-node-cidrs=true \
- --service-cluster-ip-range=10.1.0.0/16 \
- --cluster-cidr=10.2.0.0/16 \
- --cluster-name=kubernetes \
- --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
- --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
- --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
- --root-ca-file=/opt/kubernetes/ssl/ca.pem \
- --leader-elect=true \
- --v=2 \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=5
- [Install]
- WantedBy=multi-user.target
- 4.4.2.2 [Controller Manager] Start the Controller Manager
- [root@linux-node1 ~]# systemctl daemon-reload
- [root@linux-node1 scripts]# systemctl enable kube-controller-manager
- [root@linux-node1 scripts]# systemctl start kube-controller-manager
- 4.4.2.3 [Controller Manager] Check the service status
- [root@linux-node1 scripts]# systemctl status kube-controller-manager
- 4.4.3.1 [Kubernetes Scheduler] Configure the Kubernetes Scheduler
- [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
- [Unit]
- Description=Kubernetes Scheduler
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- [Service]
- ExecStart=/opt/kubernetes/bin/kube-scheduler \
- --address=127.0.0.1 \
- --master=http://127.0.0.1:8080 \
- --leader-elect=true \
- --v=2 \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=5
- [Install]
- WantedBy=multi-user.target
- 4.4.3.2 [Kubernetes Scheduler] Deploy the service
- [root@linux-node1 ~]# systemctl daemon-reload
- [root@linux-node1 scripts]# systemctl enable kube-scheduler
- [root@linux-node1 scripts]# systemctl start kube-scheduler
- [root@linux-node1 scripts]# systemctl status kube-scheduler
- 4.4.3.3 [kubectl CLI] Prepare the binary package
- [root@linux-node1 ~]# #cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz # may require a proxy to download
- [root@linux-node1 ~]# #cd /usr/local/src && tar xf kubernetes-client-linux-amd64.tar.gz
- [root@linux-node1 ~]# cd /usr/local/src/kubernetes/client/bin
- [root@linux-node1 bin]# cp kubectl /opt/kubernetes/bin/
- 4.4.3.4 [kubectl CLI] Create the admin certificate signing request
- [root@linux-node1 ~]# cd /usr/local/src/ssl/
- [root@linux-node1 ssl]# vim admin-csr.json
- {
- "CN": "admin",
- "hosts": [],
- "key": {
- "algo": "rsa",
- "size":
- },
- "names": [
- {
- "C": "CN",
- "ST": "BeiJing",
- "L": "BeiJing",
- "O": "system:masters",
- "OU": "System"
- }
- ]
- }
- 4.4.3.5 [kubectl CLI] Generate the admin certificate and private key
- [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
- [root@linux-node1 ssl]# ls -l admin*
- -rw-r--r-- admin.csr
- -rw-r--r-- admin-csr.json
- -rw------- admin-key.pem
- -rw-r--r-- admin.pem
- [root@linux-node1 ssl]# mv admin*.pem /opt/kubernetes/ssl/
- 4.4.3.6 [kubectl CLI] Set the cluster parameters
- [root@linux-node1 src]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443
- Cluster "kubernetes" set.
- 4.4.3.7 [kubectl CLI] Set the client authentication parameters
- [root@linux-node1 src]# kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem
- User "admin" set.
- 4.4.3.8 [kubectl CLI] Set the context parameters
- [root@linux-node1 src]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin
- Context "kubernetes" created.
- 4.4.3.9 [kubectl CLI] Set the default context
- [root@linux-node1 src]# kubectl config use-context kubernetes
- Switched to context "kubernetes".
- 4.4.3.10 [kubectl CLI] Use kubectl (check component status)
- [root@linux-node1 ~]# kubectl get cs
- NAME STATUS MESSAGE ERROR
- controller-manager Healthy ok
- scheduler Healthy ok
- etcd-0 Healthy {"health":"true"}
- etcd-1 Healthy {"health":"true"}
- etcd-2 Healthy {"health":"true"}
4.5 Node deployment
- 4.5.1.1 [kubelet] Prepare the binaries and copy them from linux-node1 to the node machines.
- [root@linux-node1 bin]# cd /usr/local/src/kubernetes/server/bin/ && cp kubelet kube-proxy /opt/kubernetes/bin/
- # scp kubelet kube-proxy 172.16.1.32:/opt/kubernetes/bin/
- # scp kubelet kube-proxy 172.16.1.33:/opt/kubernetes/bin/
- 4.5.1.2 [kubelet] Create the role binding
- [root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
- clusterrolebinding "kubelet-bootstrap" created
- 4.5.1.3 [kubelet] Create the kubelet bootstrapping kubeconfig file and set the cluster parameters
- [root@linux-node1 ~]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=bootstrap.kubeconfig
- Cluster "kubernetes" set.
- 4.5.1.4 [kubelet] Set the client authentication parameters
- [root@linux-node1 ~]# kubectl config set-credentials kubelet-bootstrap --token=cebfb6641d0845bd61808e2337955ea0 --kubeconfig=bootstrap.kubeconfig
- User "kubelet-bootstrap" set.
- 4.5.1.5 [kubelet] Set the context parameters
- [root@linux-node1 ~]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
- Context "default" created.
- 4.5.1.6 [kubelet] Select the default context
- [root@linux-node1 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
- Switched to context "default".
- [root@linux-node1 kubernetes]# cp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig /opt/kubernetes/cfg
- # scp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig 172.16.1.32:/opt/kubernetes/cfg
- # scp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig 172.16.1.33:/opt/kubernetes/cfg
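Having distributed the file, a hedged way to confirm it points at the right cluster endpoint (the embedded certificate data is redacted in the output):

```bash
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
```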
- 4.5.1.7 [kubelet] Set up CNI support
- [root@linux-node1 ~]# mkdir -p /etc/cni/net.d
- [root@linux-node1 ~]# vim /etc/cni/net.d/10-default.conf
- {
- "name": "flannel",
- "type": "flannel",
- "delegate": {
- "bridge": "docker0",
- "isDefaultGateway": true,
- "mtu":
- }
- }
- # scp -r /etc/cni/net.d 172.16.1.32:/etc/cni/
- # scp -r /etc/cni/net.d 172.16.1.33:/etc/cni/
- 4.5.1.8 [kubelet] Create the kubelet working directory
- [root@linux-node1 ~]# mkdir /var/lib/kubelet
- # scp -r /var/lib/kubelet 172.16.1.32:/var/lib/
- # scp -r /var/lib/kubelet 172.16.1.33:/var/lib/
- 4.5.1.9 [kubelet] Create the kubelet service config
- # change the highlighted values (the node's address) on each node
- [root@k8s-node1 ~]# vim /usr/lib/systemd/system/kubelet.service
- [Unit]
- Description=Kubernetes Kubelet
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- After=docker.service
- Requires=docker.service
- [Service]
- WorkingDirectory=/var/lib/kubelet
- ExecStart=/opt/kubernetes/bin/kubelet \
- --address=172.16.1.31 \
- --hostname-override=172.16.1.31 \
- --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
- --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
- --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
- --cert-dir=/opt/kubernetes/ssl \
- --network-plugin=cni \
- --cni-conf-dir=/etc/cni/net.d \
- --cni-bin-dir=/opt/kubernetes/bin/cni \
- --cluster-dns=10.1.0.2 \
- --cluster-domain=cluster.local. \
- --hairpin-mode hairpin-veth \
- --allow-privileged=true \
- --fail-swap-on=false \
- --v=2 \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=5
- [Install]
- WantedBy=multi-user.target
- # scp /usr/lib/systemd/system/kubelet.service 172.16.1.32:/usr/lib/systemd/system/
- # scp /usr/lib/systemd/system/kubelet.service 172.16.1.33:/usr/lib/systemd/system/
- 4.5.1.10 [kubelet] Start the kubelet
- [root@linux-node2 ~]# systemctl daemon-reload
- [root@linux-node2 ~]# systemctl enable kubelet
- [root@linux-node2 ~]# systemctl start kubelet
- [root@linux-node3 ~]# systemctl daemon-reload
- [root@linux-node3 ~]# systemctl enable kubelet
- [root@linux-node3 ~]# systemctl start kubelet
- 4.5.1.11 [kubelet] Check the service status
- [root@linux-node2 kubernetes]# systemctl status kubelet
- 4.5.1.12 [kubelet] View the CSR requests (note: run on linux-node1)
- [root@linux-node1 ~]# kubectl get csr
- NAME AGE REQUESTOR CONDITION
- node-csr-0_w5F1FM_la_SeGiu3Y5xELRpYUjjT2icIFk9gO9KOU 1m kubelet-bootstrap Pending
- 4.5.1.13 [kubelet] Approve the kubelet TLS certificate requests
- [root@linux-node1 ~]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
- certificatesigningrequest.certificates.k8s.io "node-csr-QCgiejwSx_bPgcBLNxHkMHs-lzNAY-bJNgm4skUMqII" approved
- After approval, the nodes show as Ready:
- [root@linux-node1 ssl]# kubectl get node -o wide
- NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
- 172.16.1.32 Ready <none> 10m v1.10.1 <none> CentOS Linux (Core) 3.10.0-…el7.x86_64 docker://19.3.5
- 172.16.1.33 Ready <none> 10m v1.10.1 <none> CentOS Linux (Core) 3.10.0-…el7.x86_64 docker://19.3.5
- 4.5.2.1 [Kubernetes Proxy] Configure kube-proxy to use LVS
- [root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack
- 4.5.2.2 [Kubernetes Proxy] Create the kube-proxy certificate request
- [root@linux-node1 ~]# cd /usr/local/src/ssl/
- [root@linux-node1 ssl]# vim kube-proxy-csr.json
- {
- "CN": "system:kube-proxy",
- "hosts": [],
- "key": {
- "algo": "rsa",
- "size":
- },
- "names": [
- {
- "C": "CN",
- "ST": "BeiJing",
- "L": "BeiJing",
- "O": "k8s",
- "OU": "System"
- }
- ]
- }
- 4.5.2.3 [Kubernetes Proxy] Generate the certificate
- [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
- 4.5.2.4 [Kubernetes Proxy] Distribute the certificate to all Node machines
- [root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
- # scp kube-proxy*.pem 172.16.1.32:/opt/kubernetes/ssl/
- # scp kube-proxy*.pem 172.16.1.33:/opt/kubernetes/ssl/
- 4.5.2.5 [Kubernetes Proxy] Create the kube-proxy kubeconfig file
- [root@linux-node1 ssl]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=kube-proxy.kubeconfig
- Cluster "kubernetes" set.
- [root@linux-node1 ssl]# kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
- User "kube-proxy" set.
- [root@linux-node1 ssl]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
- Context "default" created.
- [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- Switched to context "default".
- 4.5.2.6 [Kubernetes Proxy] Distribute the kubeconfig file
- [root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
- # scp kube-proxy.kubeconfig 172.16.1.32:/opt/kubernetes/cfg/
- # scp kube-proxy.kubeconfig 172.16.1.33:/opt/kubernetes/cfg/
- 4.5.2.7 [Kubernetes Proxy] Create the kube-proxy service config
- [root@linux-node1 ~]# mkdir /var/lib/kube-proxy
- # scp -r /var/lib/kube-proxy 172.16.1.32:/var/lib/
- # scp -r /var/lib/kube-proxy 172.16.1.33:/var/lib/
- # change the highlighted values (the node's address) on each node
- [root@k8s-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
- [Unit]
- Description=Kubernetes Kube-Proxy Server
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- After=network.target
- [Service]
- WorkingDirectory=/var/lib/kube-proxy
- ExecStart=/opt/kubernetes/bin/kube-proxy \
- --bind-address=172.16.1.31 \
- --hostname-override=172.16.1.31 \
- --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
- --masquerade-all \
- --feature-gates=SupportIPVSProxyMode=true \
- --proxy-mode=ipvs \
- --ipvs-min-sync-period=5s \
- --ipvs-sync-period=5s \
- --ipvs-scheduler=rr \
- --v=2 \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=5
- LimitNOFILE=65536
- [Install]
- WantedBy=multi-user.target
- # scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.32:/usr/lib/systemd/system/
- # scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.33:/usr/lib/systemd/system/
- 4.5.2.8 [Kubernetes Proxy] Start the Kubernetes Proxy (on the Node machines)
- [root@linux-node2 ~]# systemctl daemon-reload
- [root@linux-node2 ~]# systemctl enable kube-proxy
- [root@linux-node2 ~]# systemctl start kube-proxy
- [root@linux-node3 ~]# systemctl daemon-reload
- [root@linux-node3 ~]# systemctl enable kube-proxy
- [root@linux-node3 ~]# systemctl start kube-proxy
- 4.5.2.9 [Kubernetes Proxy] Check the kube-proxy service status
- [root@linux-node2 scripts]# systemctl status kube-proxy
- Check the LVS status:
- [root@linux-node2 ~]# ipvsadm -L -n
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 10.1.0.1:443 rr persistent 10800
- -> 172.16.1.31:6443 Masq 1 0 0
- If you installed the kubelet and kube-proxy services on both test machines, check the node status with:
- [root@linux-node1 ssl]# kubectl get node
- NAME STATUS ROLES AGE VERSION
- 172.16.1.32 Ready <none> 22m v1.10.1
- 172.16.1.33 Ready <none> 3m v1.10.1
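To watch kube-proxy's IPVS mode react to a new Service, the sketch below creates a throwaway deployment and exposes it (the name net-test is purely illustrative); a new virtual server entry should then appear in ipvsadm:

```bash
kubectl run net-test --image=alpine --replicas=2 sleep 360000   # in v1.10, kubectl run creates a deployment
kubectl expose deployment net-test --port=80                    # allocates a ClusterIP
ipvsadm -L -n                                                   # expect a new TCP <cluster-ip>:80 rr entry
```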
4.6 Flannel network deployment
- 4.6.1 Create a certificate for Flannel
- [root@linux-node1 ~]# cd /usr/local/src/ssl
- [root@linux-node1 ssl]# vim flanneld-csr.json
- {
- "CN": "flanneld",
- "hosts": [],
- "key": {
- "algo": "rsa",
- "size": 2048
- },
- "names": [
- {
- "C": "CN",
- "ST": "BeiJing",
- "L": "BeiJing",
- "O": "k8s",
- "OU": "System"
- }
- ]
- }
- 4.6.2 Generate the certificate
- [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
- [root@linux-node1 ssl]# ls flanneld*.pem
- flanneld-key.pem flanneld.pem
- [root@linux-node1 ssl]# ls -l flanneld*.pem
- -rw------- 1 root root 1675 Dec 27 18:55 flanneld-key.pem
- -rw-r--r-- 1 root root 1391 Dec 27 18:55 flanneld.pem
- 4.6.3 Distribute the certificates
- [root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
- # scp flanneld*.pem 172.16.1.32:/opt/kubernetes/ssl/
- # scp flanneld*.pem 172.16.1.33:/opt/kubernetes/ssl/
- 4.6.4 Download the Flannel package
- [root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
- [root@linux-node1 src]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz
- [root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
- # copy to the node machines
- # scp flanneld mk-docker-opts.sh 172.16.1.32:/opt/kubernetes/bin/
- # scp flanneld mk-docker-opts.sh 172.16.1.33:/opt/kubernetes/bin/
- # also copy the corresponding helper scripts into /opt/kubernetes/bin:
- [root@linux-node1 ~]# wget https://dl.k8s.io/v1.10.1/kubernetes.tar.gz # may require a proxy to download
- [root@linux-node1 ~]# tar xf kubernetes.tar.gz -C /usr/local/src/ && cd /usr/local/src/kubernetes/cluster/centos/node/bin/
- [root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
- # scp remove-docker0.sh 172.16.1.32:/opt/kubernetes/bin/
- # scp remove-docker0.sh 172.16.1.33:/opt/kubernetes/bin/
- 4.6.5 Configure Flannel
- [root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel
- FLANNEL_ETCD="-etcd-endpoints=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379"
- FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
- FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
- FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
- FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
- # copy the config to the other nodes
- # scp /opt/kubernetes/cfg/flannel 172.16.1.32:/opt/kubernetes/cfg/
- # scp /opt/kubernetes/cfg/flannel 172.16.1.33:/opt/kubernetes/cfg/
- 4.6.6 Set up the Flannel systemd service
- [root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service
- [Unit]
- Description=Flanneld overlay address etcd agent
- After=network.target
- Before=docker.service
- [Service]
- EnvironmentFile=-/opt/kubernetes/cfg/flannel
- ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
- ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
- ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
- Type=notify
- [Install]
- WantedBy=multi-user.target
- RequiredBy=docker.service
- Copy the unit file to the other nodes:
- # scp /usr/lib/systemd/system/flannel.service 172.16.1.32:/usr/lib/systemd/system/
- # scp /usr/lib/systemd/system/flannel.service 172.16.1.33:/usr/lib/systemd/system/
- 4.6.7 [Flannel CNI integration] Download the CNI plugins
- [root@linux-node1 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
- [root@linux-node1 ~]# mkdir /opt/kubernetes/bin/cni
- [root@linux-node1 ~]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
- # scp -r /opt/kubernetes/bin/cni 172.16.1.32:/opt/kubernetes/bin/
- # scp -r /opt/kubernetes/bin/cni 172.16.1.33:/opt/kubernetes/bin/
- 4.6.8 [Flannel CNI integration] Create the etcd key
- [root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem --no-sync -C https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
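A hedged read-back confirms the network config actually landed under the flannel prefix:

```bash
/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
  --cert-file /opt/kubernetes/ssl/flanneld.pem \
  --key-file /opt/kubernetes/ssl/flanneld-key.pem \
  --no-sync -C https://172.16.1.31:2379 \
  get /kubernetes/network/config   # should print the JSON written above
```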
- 4.6.9 [Flannel CNI integration] Start flannel (on all nodes)
- [root@linux-node1 ~]# systemctl daemon-reload
- [root@linux-node1 ~]# systemctl enable flannel
- [root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
- [root@linux-node1 ~]# systemctl start flannel
- 4.6.10 [Flannel CNI integration] Check the service status
- [root@linux-node1 ~]# systemctl status flannel
- 4.6.11 [Flannel CNI integration] Configure Docker to use Flannel
- [root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
- [Unit] # under [Unit], modify After= and add Requires=
- After=network-online.target firewalld.service flannel.service
- Wants=network-online.target
- Requires=flannel.service # Docker startup depends on the flannel network
- [Service] # add EnvironmentFile=-/run/flannel/docker
- Type=notify
- EnvironmentFile=-/run/flannel/docker
- ExecStart=/usr/bin/dockerd $DOCKER_OPTS
- # copy the config to the other two nodes
- # scp /usr/lib/systemd/system/docker.service 172.16.1.32:/usr/lib/systemd/system/
- # scp /usr/lib/systemd/system/docker.service 172.16.1.33:/usr/lib/systemd/system/
- 4.6.12 [Flannel CNI integration] Restart Docker (on all nodes)
- [root@linux-node1 ~]# systemctl daemon-reload
- [root@linux-node1 ~]# systemctl restart docker
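After the restart, a quick sanity check is that mk-docker-opts.sh handed Docker a per-node subnet out of 10.2.0.0/16 and that the flannel VXLAN interface is up:

```bash
cat /run/flannel/docker       # DOCKER_OPTS with a --bip=10.2.x.1/24 style option
ip -4 addr show flannel.1     # the node's flannel VXLAN interface
ip -4 addr show docker0       # docker0 should now sit inside the same flannel subnet
```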
4.7 CoreDNS deployment
- 4.7.1 Write the coredns.yaml file
- [root@linux-node1 ~]# vim coredns.yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: coredns
- namespace: kube-system
- labels:
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRole
- metadata:
- labels:
- kubernetes.io/bootstrapping: rbac-defaults
- addonmanager.kubernetes.io/mode: Reconcile
- name: system:coredns
- rules:
- - apiGroups:
- - ""
- resources:
- - endpoints
- - services
- - pods
- - namespaces
- verbs:
- - list
- - watch
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- annotations:
- rbac.authorization.kubernetes.io/autoupdate: "true"
- labels:
- kubernetes.io/bootstrapping: rbac-defaults
- addonmanager.kubernetes.io/mode: EnsureExists
- name: system:coredns
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: system:coredns
- subjects:
- - kind: ServiceAccount
- name: coredns
- namespace: kube-system
- ---
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: coredns
- namespace: kube-system
- labels:
- addonmanager.kubernetes.io/mode: EnsureExists
- data:
- Corefile: |
- .:53 {
- errors
- health
- kubernetes cluster.local. in-addr.arpa ip6.arpa {
- pods insecure
- upstream
- fallthrough in-addr.arpa ip6.arpa
- }
- prometheus :9153
- proxy . /etc/resolv.conf
- cache 30
- }
- ---
- apiVersion: extensions/v1beta1
- kind: Deployment
- metadata:
- name: coredns
- namespace: kube-system
- labels:
- k8s-app: coredns
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- kubernetes.io/name: "CoreDNS"
- spec:
- replicas: 2
- strategy:
- type: RollingUpdate
- rollingUpdate:
- maxUnavailable: 1
- selector:
- matchLabels:
- k8s-app: coredns
- template:
- metadata:
- labels:
- k8s-app: coredns
- spec:
- serviceAccountName: coredns
- tolerations:
- - key: node-role.kubernetes.io/master
- effect: NoSchedule
- - key: "CriticalAddonsOnly"
- operator: "Exists"
- containers:
- - name: coredns
- image: coredns/coredns:1.0.6
- imagePullPolicy: IfNotPresent
- resources:
- limits:
- memory: 170Mi
- requests:
- cpu: 100m
- memory: 70Mi
- args: [ "-conf", "/etc/coredns/Corefile" ]
- volumeMounts:
- - name: config-volume
- mountPath: /etc/coredns
- ports:
- - containerPort: 53
- name: dns
- protocol: UDP
- - containerPort: 53
- name: dns-tcp
- protocol: TCP
- livenessProbe:
- httpGet:
- path: /health
- port: 8080
- scheme: HTTP
- initialDelaySeconds: 60
- timeoutSeconds: 5
- successThreshold: 1
- failureThreshold: 5
- dnsPolicy: Default
- volumes:
- - name: config-volume
- configMap:
- name: coredns
- items:
- - key: Corefile
- path: Corefile
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: coredns
- namespace: kube-system
- labels:
- k8s-app: coredns
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- kubernetes.io/name: "CoreDNS"
- spec:
- selector:
- k8s-app: coredns
- clusterIP: 10.1.0.2
- ports:
- - name: dns
- port: 53
- protocol: UDP
- - name: dns-tcp
- port: 53
- protocol: TCP
- 4.7.2 Deploy CoreDNS
- [root@linux-node1 ~]# kubectl create -f coredns.yaml
- 4.7.3 Test that DNS resolution works
- [root@linux-node1 ~]# kubectl run dns-test --rm -it --image=alpine /bin/sh
- If you don't see a command prompt, try pressing enter.
- / # ping -c 2 www.baidu.com
- PING www.baidu.com (61.135.169.125): 56 data bytes
- 64 bytes from 61.135.169.125: seq=0 time=5.718 ms
- 64 bytes from 61.135.169.125: seq=1 time=5.695 ms
- --- www.baidu.com ping statistics ---
- 2 packets transmitted, 2 packets received, 0% packet loss
- round-trip min/avg/max = 5.695/5.706/5.718 ms
- / #
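Pinging an external name only exercises upstream resolution; to confirm that the cluster domain itself resolves through CoreDNS (clusterIP 10.1.0.2), run a lookup from inside the same test pod:

```bash
/ # nslookup kubernetes.default.svc.cluster.local
# expect the answer 10.1.0.1 (the kubernetes Service), served by 10.1.0.2
```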
4.8 Dashboard deployment
- 4.8.1 Create a directory to hold the dashboard yaml files (any path works)
- [root@linux-node1 ~]# mkdir -p /root/dashboard_yaml_dir
- 4.8.2 Write the admin-user-sa-rbac.yaml file
- [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/admin-user-sa-rbac.yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: admin-user
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- name: admin-user
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: cluster-admin
- subjects:
- - kind: ServiceAccount
- name: admin-user
- namespace: kube-system
- 4.8.3 Write the kubernetes-dashboard.yaml file
- [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/kubernetes-dashboard.yaml
- # Copyright 2017 The Kubernetes Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- # Configuration to deploy release version of the Dashboard UI compatible with
- # Kubernetes 1.8.
- #
- # Example usage: kubectl create -f <this_file>
- # ------------------- Dashboard Secret ------------------- #
- apiVersion: v1
- kind: Secret
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard-certs
- namespace: kube-system
- type: Opaque
- ---
- # ------------------- Dashboard Service Account ------------------- #
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- namespace: kube-system
- ---
- # ------------------- Dashboard Role & Role Binding ------------------- #
- kind: Role
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- name: kubernetes-dashboard-minimal
- namespace: kube-system
- rules:
- # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- - apiGroups: [""]
- resources: ["secrets"]
- verbs: ["create"]
- # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- - apiGroups: [""]
- resources: ["configmaps"]
- verbs: ["create"]
- # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- - apiGroups: [""]
- resources: ["secrets"]
- resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
- verbs: ["get", "update", "delete"]
- # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- - apiGroups: [""]
- resources: ["configmaps"]
- resourceNames: ["kubernetes-dashboard-settings"]
- verbs: ["get", "update"]
- # Allow Dashboard to get metrics from heapster.
- - apiGroups: [""]
- resources: ["services"]
- resourceNames: ["heapster"]
- verbs: ["proxy"]
- - apiGroups: [""]
- resources: ["services/proxy"]
- resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
- verbs: ["get"]
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
- metadata:
- name: kubernetes-dashboard-minimal
- namespace: kube-system
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: Role
- name: kubernetes-dashboard-minimal
- subjects:
- - kind: ServiceAccount
- name: kubernetes-dashboard
- namespace: kube-system
- ---
- # ------------------- Dashboard Deployment ------------------- #
- kind: Deployment
- apiVersion: apps/v1
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- namespace: kube-system
- spec:
- replicas: 1
- revisionHistoryLimit: 10
- selector:
- matchLabels:
- k8s-app: kubernetes-dashboard
- template:
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- spec:
- containers:
- - name: kubernetes-dashboard
- #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
- image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
- ports:
- - containerPort: 8443
- protocol: TCP
- args:
- - --auto-generate-certificates
- # Uncomment the following line to manually specify Kubernetes API server Host
- # If not specified, Dashboard will attempt to auto discover the API server and connect
- # to it. Uncomment only if the default does not work.
- # - --apiserver-host=http://my-address:port
- volumeMounts:
- - name: kubernetes-dashboard-certs
- mountPath: /certs
- # Create on-disk volume to store exec logs
- - mountPath: /tmp
- name: tmp-volume
- livenessProbe:
- httpGet:
- scheme: HTTPS
- path: /
- port: 8443
- initialDelaySeconds: 30
- timeoutSeconds: 30
- volumes:
- - name: kubernetes-dashboard-certs
- secret:
- secretName: kubernetes-dashboard-certs
- - name: tmp-volume
- emptyDir: {}
- serviceAccountName: kubernetes-dashboard
- # Comment the following tolerations if Dashboard must not be deployed on master
- tolerations:
- - key: node-role.kubernetes.io/master
- effect: NoSchedule
- ---
- # ------------------- Dashboard Service ------------------- #
- kind: Service
- apiVersion: v1
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
- name: kubernetes-dashboard
- namespace: kube-system
- spec:
- type: NodePort
- ports:
- - port: 443
- targetPort: 8443
- nodePort:
- selector:
- k8s-app: kubernetes-dashboard
- 4.8.4 Write the ui-admin-rbac.yaml file
- [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-admin-rbac.yaml
- kind: ClusterRole
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- name: ui-admin
- rules:
- - apiGroups:
- - ""
- resources:
- - services
- - services/proxy
- verbs:
- - '*'
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
- metadata:
- name: ui-admin-binding
- namespace: kube-system
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: ui-admin
- subjects:
- - apiGroup: rbac.authorization.k8s.io
- kind: User
- name: admin
- 4.8.5 Write the ui-read-rbac.yaml file
- [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-read-rbac.yaml
- kind: ClusterRole
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- name: ui-read
- rules:
- - apiGroups:
- - ""
- resources:
- - services
- - services/proxy
- verbs:
- - get
- - list
- - watch
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
- metadata:
- name: ui-read-binding
- namespace: kube-system
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: ui-read
- subjects:
- - apiGroup: rbac.authorization.k8s.io
- kind: User
- name: readonly
- 4.8.6 Create the Dashboard
- [root@linux-node1 ~]# kubectl create -f /root/dashboard_yaml_dir/
- [root@linux-node1 ~]# kubectl cluster-info
- Kubernetes master is running at https://172.16.1.31:6443
- kubernetes-dashboard is running at
- https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
- To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
- 4.8.7 Access the Dashboard
- https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
- Username: admin, password: admin; choose the Token login mode.
- 4.8.8 Get the Token
- kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')