Kubernetes Container Cluster Management

Introduction to Kubernetes

Kubernetes, also known as K8s, is a container cluster management system that Google open-sourced in June 2014, written in Go.
K8s grew out of Borg, Google's internal container cluster management system, which had already been running at scale in Google's production environment for about a decade.
K8s automates the deployment, scaling, and management of containerized applications, providing a complete feature set that includes resource scheduling, deployment management, service discovery, scaling up and down, and monitoring.
Kubernetes v1.0 was officially released in July 2015.
The goal of Kubernetes is to make deploying containerized applications simple and efficient.
Official website: www.kubernetes.io

Kubernetes Main Features

  • Volumes

Containers in a Pod can share data through volumes.

  • Application health checks

A service inside a container may hang and stop serving requests; health-check policies can be configured to keep the application robust.
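Such a health-check policy is typically declared as a liveness probe in the Pod spec. A minimal sketch, in which the Pod name, image, probe path, and timings are illustrative rather than taken from this deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx               # placeholder image
    livenessProbe:             # kubelet restarts the container if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first probe
      periodSeconds: 10        # probe interval
```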

  • Replicated application instances

A controller maintains the Pod replica count, ensuring that a Pod, or a group of Pods of the same kind, stays available.

  • Autoscaling

Automatically scales the number of Pod replicas based on configured metrics (such as CPU utilization).
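This kind of policy can be declared with a HorizontalPodAutoscaler. A sketch targeting a hypothetical Deployment named nginx (all names and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa              # hypothetical name
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out above 80% average CPU
```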

  • Service discovery

Programs in containers can discover a Pod's entry address via environment variables or a DNS add-on.

  • Load balancing

A group of Pod replicas is assigned a private cluster IP address, and requests are load-balanced to the backend containers. Other Pods in the cluster can reach the application through this ClusterIP.
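A ClusterIP Service of this kind can be expressed as a minimal manifest; the name, selector label, and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                  # hypothetical name
spec:
  selector:
    app: nginx                 # requests are forwarded to Pods carrying this label
  ports:
  - port: 88                   # port exposed on the ClusterIP
    targetPort: 80             # container port behind it
```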

  • Rolling updates

Services are updated without interruption, one Pod at a time, rather than deleting the entire service at once.

  • Service orchestration

Services are deployed from file descriptions, making application deployment more efficient.

  • Resource monitoring

Node components integrate the cAdvisor resource collector; Heapster can aggregate resource data from every node in the cluster, store it in the InfluxDB time-series database, and have Grafana visualize it.

  • Authentication and authorization

Supports authentication and authorization policies such as role-based access control (RBAC).

Basic Object Concepts

Basic objects:

  • Pod

The Pod is the smallest deployable unit. A Pod consists of one or more containers that share storage and network and run on the same Docker host.
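As an illustrative sketch, a Pod whose two containers share storage through an emptyDir volume (all names and images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers         # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: helper
    image: busybox
    command: ["sh", "-c", "echo hello > /pod-data/index.html; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data     # same volume, different mount point
```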

  • Service

A Service is an abstraction of an application: it defines a logical set of Pods and a policy for accessing them.
The Service fronts the Pod set as a single access point with an assigned cluster IP address; requests to this IP are load-balanced to the containers in the backend Pods.
A Service selects the group of Pods it serves through a label selector.

  • Volume

A volume shares data among the containers in a Pod.

  • Namespace

Namespaces logically partition objects, so that different projects, users, and so on can be managed separately under their own control policies, enabling multi-tenancy.
A namespace is also called a virtual cluster.

  • Label

Labels distinguish objects (such as Pods and Services) and exist as key/value pairs; each object can carry multiple labels, and objects are associated with one another through them.
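For illustration, labels are plain key/value pairs in an object's metadata (the values here are hypothetical):

```yaml
metadata:
  labels:
    app: nginx                 # key/value pairs; an object may carry several
    tier: frontend
```

Objects carrying a label can then be selected, for example, with `kubectl get pods -l app=nginx`.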

Higher-level abstractions built on the basic objects:

  • ReplicaSet

The next generation of the Replication Controller. It ensures the specified number of Pod replicas at any given time and provides features such as declarative updates.
The only difference between an RC and an RS is label-selector support: RS supports the newer set-based selectors, while RC supports only equality-based selectors.
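The difference shows in the selector syntax. A ReplicaSet selector may mix both forms (the labels here are hypothetical), whereas an RC accepts only the plain equality map:

```yaml
selector:
  matchLabels:                 # equality-based: label must equal this value
    app: nginx
  matchExpressions:            # set-based: supported by ReplicaSet, not RC
  - key: tier
    operator: In
    values: [frontend, cache]
```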

  • Deployment

A Deployment is a higher-level API object that manages ReplicaSets and Pods and provides declarative updates and other features.
The official recommendation is to manage ReplicaSets through Deployments rather than use them directly, which means you may never need to manipulate ReplicaSet objects yourself.
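A minimal Deployment sketch managing three replicas (name, label, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                  # hypothetical name
spec:
  replicas: 3                  # the underlying ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: nginx
  template:                    # Pod template stamped out for each replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```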

  • StatefulSet

A StatefulSet suits stateful applications: stable, unique network identifiers, persistent storage, and ordered deployment, scaling, deletion, and rolling updates.
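A minimal StatefulSet sketch; it assumes a headless Service named `web` already exists to provide the stable network identities, and all names and the image are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                    # hypothetical name
spec:
  serviceName: web             # headless Service backing the stable identities
  replicas: 2                  # Pods are created in order: web-0, then web-1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
```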

  • DaemonSet

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When a node joins the Kubernetes cluster, the Pod is scheduled onto it; when the node is removed from the cluster, that Pod is deleted. Deleting the DaemonSet cleans up all the Pods it created.
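A DaemonSet sketch that runs one placeholder Pod per node; the name and image are hypothetical, standing in for a typical per-node log or metrics agent:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent             # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox         # placeholder; a real agent image in practice
        command: ["sh", "-c", "sleep 3600"]
```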

  • Job

A one-off task: once it completes, the Pod is finished and no new container is started. Tasks can also be run on a schedule.
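A minimal Job sketch, using the classic pi-computation example (name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                     # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never     # the Pod is not restarted after completion
```

For scheduled runs, the CronJob resource wraps a Job template in a cron schedule.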

System Architecture and Component Roles


Master components:

  • kube-apiserver

The Kubernetes API server is the unified entry point of the cluster and the coordinator between its components. It exposes its interface as an HTTP API; all create, read, update, delete, and watch operations on object resources go through the API server, which then persists them to etcd.

  • kube-controller-manager

Handles the cluster's routine background tasks. Each resource has a corresponding controller, and the controller manager is responsible for managing these controllers.

  • kube-scheduler

Selects a Node for each newly created Pod according to the scheduling algorithm.

Node components:

  • kubelet

The kubelet is the master's agent on each Node. It manages the lifecycle of the containers running on the local machine: creating containers, mounting Pod volumes, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.

  • kube-proxy

Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.

  • docker or rocket/rkt

Runs the containers.

Third-party service:

  • etcd

A distributed key-value store, used to persist cluster state such as Pod and Service object data.

The diagram below clearly shows Kubernetes' architecture and the communication protocols between its components.

Alright, enough talk. Let's get building!

Cluster Deployment
1. Plan the environment
2. Install Docker
3. Generate self-signed TLS certificates
4. Deploy the Etcd cluster
5. Deploy the Flannel network
6. Create the Node kubeconfig files
7. Fetch the K8s binary package
8. Run the Master components
9. Run the Node components
10. Check cluster status
11. Launch a test example
12. Deploy the Web UI (Dashboard)

Cluster Deployment – Environment Planning

Role    IP               Components                                                    Recommended specs
master  192.168.247.211  kube-apiserver, kube-controller-manager, kube-scheduler, etcd  2+ CPU cores, 2 GB+ RAM
node01  192.168.247.212  kubelet, kube-proxy, docker, flannel, etcd
node02  192.168.247.213  kubelet, kube-proxy, docker, flannel, etcd

Software versions
Software    Version
Linux OS    CentOS7.4_x64
Kubernetes  1.11.7
Docker      17.12-ce
Etcd        3.0

Kubernetes releases: https://github.com/kubernetes/kubernetes/releases

System environment preparation

cat <<EOF >>/etc/hosts
192.168.247.211 master
192.168.247.212 node01
192.168.247.213 node02
EOF
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
swapoff -a
sed -i 's/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g' /etc/fstab
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpdate -u cn.pool.ntp.org
hwclock --systohc
timedatectl set-timezone Asia/Shanghai
yum install wget vim lsof net-tools lrzsz -y
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
# Set resource limits and kernel parameters
echo "* soft nofile 190000" >> /etc/security/limits.conf
echo "* hard nofile 200000" >> /etc/security/limits.conf
echo "* soft nproc 252144" >> /etc/security/limits.conf
echo "* hard nproc 262144" >> /etc/security/limits.conf
tee /etc/sysctl.conf <<-'EOF'
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

net.ipv4.tcp_tw_recycle = 0
net.ipv4.ip_local_port_range = 10000 61000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_forward = 1
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_mem = 131072 262144 524288
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_low_latency = 0
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_rmem = 8760 256960 4088000
net.ipv4.tcp_wmem = 8760 256960 4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_syn_retries = 1
EOF
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
sysctl -p
reboot

Cluster Deployment – Install Docker

# Step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2 unzip
# Step 2: add the repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: update the cache and install Docker CE
yum makecache fast
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install docker-ce-17.03.2.ce-1.el7.centos -y
# Step 4: start the Docker service
service docker start
systemctl enable docker

# Notes:

# The official repository enables only the latest packages by default; other channels can be
# enabled by editing the repo file. For example, the test channel is not enabled by default
# and can be turned on as follows (the same applies to other test channels):
# vim /etc/yum.repos.d/docker-ce.repo
# Under [docker-ce-test], change enabled=0 to enabled=1
#
# Installing a specific Docker CE version:
# Step 1: list the available Docker CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
# Loading mirror speeds from cached hostfile
# Loaded plugins: branch, fastestmirror, langpacks
# docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
# docker-ce.x86_64 17.03.1.ce-1.el7.centos @docker-ce-stable
# docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
# Available Packages
# Step 2: install the chosen version (VERSION is e.g. 17.03.0.ce.1-1.el7.centos above)
# sudo yum -y install docker-ce-[VERSION]

# When installing over an Alibaba Cloud classic or VPC internal network, replace the Step 2
# command with one of the following:
# Classic network:
# sudo yum-config-manager --add-repo http://mirrors.aliyuncs.com/docker-ce/linux/centos/docker-ce.repo
# VPC network:
# sudo yum-config-manager --add-repo http://mirrors.cloud.aliyuncs.com/docker-ce/linux/centos/docker-ce.repo

# Configure a registry mirror (accelerator)
cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [ "https://registry.docker-cn.com"],
  "insecure-registries":["192.168.247.210:5000"]
}
EOF

Cluster Deployment – Self-Signed TLS Certificates

Component       Certificates used
etcd            ca.pem, server.pem, server-key.pem
flannel         ca.pem, server.pem, server-key.pem
kube-apiserver  ca.pem, server.pem, server-key.pem
kubelet         ca.pem, ca-key.pem
kube-proxy      ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl         ca.pem, admin.pem, admin-key.pem

Install the cfssl certificate toolkit on the master:

mkdir ssl;cd ssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 --no-check-certificate
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Run certificate.sh to generate the certificates

[root@master ssl]# cat certificate.sh

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.247.211",
    "192.168.247.212",
    "192.168.247.213",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Note: make a copy of the ssl directory first, because these generated certificates will be needed again later for RBAC authorization!

Then run the following command to keep only the pem certificates:

ls |grep -v "pem"|xargs rm -fr

Cluster Deployment – Deploy the Etcd Cluster

etcd is a highly available key-value store, used mainly for shared configuration and service discovery. It is developed and maintained by CoreOS, inspired by ZooKeeper and Doozer, written in Go, and handles log replication through the Raft consensus algorithm to guarantee strong consistency. Raft is a consensus algorithm designed for log replication in distributed systems, achieving consistency through leader election. etcd is used widely by Google's container cluster manager Kubernetes, the open-source PaaS platform Cloud Foundry, and CoreOS's Fleet. Managing node state in a distributed system has always been a hard problem; etcd is designed precisely for service discovery and registration in cluster environments, providing data TTL expiry, change watching, multiple values, directory watching, and atomic distributed-lock operations, which makes it easy to track and manage the state of cluster nodes.

etcd features:

  • Simple: a curl-friendly user API (HTTP+JSON)
  • Secure: optional SSL client certificate authentication
  • Fast: a single instance handles 1000 writes per second
  • Reliable: uses Raft to guarantee consistency

Binary download: https://github.com/coreos/etcd/releases/tag/v3.2.12
Deployment (master, node01, node02)

mkdir -p /opt/kubernetes/{bin,cfg,ssl}
[root@master ~]# tar -xf etcd-v3.2.12-linux-amd64.tar.gz
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcd /opt/kubernetes/bin/
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcdctl /opt/kubernetes/bin/

[root@master ~]# cat /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.247.211:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.211:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.211:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.211:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.247.211:2380,etcd02=https://192.168.247.212:2380,etcd03=https://192.168.247.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@master ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master ~]# cp ssl/server*pem ssl/ca*.pem /opt/kubernetes/ssl/
# Set up passwordless SSH login to the nodes
ssh-keygen
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.247.212
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.247.213

[root@master ~]# scp -r /opt/kubernetes/ 192.168.247.212:/opt/
[root@master ~]# scp -r /opt/kubernetes/ 192.168.247.213:/opt/
[root@master ~]# scp -r /usr/lib/systemd/system/etcd.service 192.168.247.212:/usr/lib/systemd/system/
[root@master ~]# scp -r /usr/lib/systemd/system/etcd.service 192.168.247.213:/usr/lib/systemd/system/
[root@master ~]# systemctl start etcd && systemctl enable etcd

Then edit the ETCD_NAME parameter (and the listen/advertise URLs, which must point at each node's own IP) in /opt/kubernetes/cfg/etcd on node01 and node02, and start etcd there.

etcd configuration parameters:

  • ETCD_NAME: node name

  • ETCD_DATA_DIR: data directory

  • ETCD_LISTEN_PEER_URLS: cluster peer listen address

  • ETCD_LISTEN_CLIENT_URLS: client listen address

  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address

  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address

  • ETCD_INITIAL_CLUSTER: cluster node addresses

  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token

  • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one

Check the cluster's health:

[root@master ssl]# /opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
cluster-health
member a6c341768b1e58b is healthy: got healthy result from https://192.168.247.211:2379
member 62b5a3c1db53387a is healthy: got healthy result from https://192.168.247.212:2379
member d0f8841f2d3e2788 is healthy: got healthy result from https://192.168.247.213:2379

Cluster Deployment – Deploy the Flannel Network

Overlay Network: a virtual networking technique layered on top of the underlying network, in which hosts are connected by virtual links.
VXLAN: encapsulates the source packet in UDP, wraps it with the underlay network's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, a tunnel endpoint decapsulates it and delivers the data to the target address.
Flannel: one kind of overlay network. It likewise encapsulates source packets inside another network packet for routing and communication, and currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends.
Other mainstream solutions for multi-host container networking include tunnel-based schemes (Weave, Open vSwitch) and routing-based schemes (Calico).


Cluster Deployment – Deploy the Flannel Network (node01, node02)

1) Write the allocated subnet range into etcd, for flanneld to use:

[root@master ssl]# /opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

2) Download the binary package

# wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar -xf flannel-v0.9.1-linux-amd64.tar.gz
scp flanneld mk-docker-opts.sh 192.168.247.212:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh 192.168.247.213:/opt/kubernetes/bin/

3) Configure Flannel

[root@node01 cfg]# pwd
/opt/kubernetes/cfg
[root@node01 cfg]# cat flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

4) Manage Flannel with systemd

[root@node01 cfg]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

5) Configure Docker to start with the Flannel-assigned subnet

[root@node01 cfg]# cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

6) Start the services (the order matters)

[root@node01 cfg]# systemctl daemon-reload
[root@node01 cfg]# systemctl restart flanneld && systemctl enable flanneld
[root@node01 cfg]# systemctl restart docker

Sync the files to the other nodes, then start the services there:

cd /opt/kubernetes/cfg/
scp flanneld 192.168.247.212:/opt/kubernetes/cfg/
scp flanneld 192.168.247.213:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/flanneld.service 192.168.247.212:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/flanneld.service 192.168.247.213:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service 192.168.247.213:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service 192.168.247.212:/usr/lib/systemd/system/

7) Test
# List all subnets in the cluster

[root@master ssl]# /opt/kubernetes/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
> ls /coreos.com/network/subnets

/coreos.com/network/subnets/172.17.100.0-24
/coreos.com/network/subnets/172.17.57.0-24
/coreos.com/network/subnets/172.17.88.0-24

# Look up the physical endpoint behind a subnet

[root@master ssl]# /opt/kubernetes/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379" \
> get /coreos.com/network/subnets/172.17.57.0-24
{"PublicIP":"192.168.247.212","BackendType":"vxlan","BackendData":{"VtepMAC":"a6:e3:be:9b:f6:b9"}}

We can see that flannel.1 and docker0 are on the same subnet.

# Ping a container in the 88 subnet

[root@node01 cfg]# ping 172.17.88.1
PING 172.17.88.1 (172.17.88.1) 56(84) bytes of data.
64 bytes from 172.17.88.1: icmp_seq=1 ttl=64 time=0.581 ms
64 bytes from 172.17.88.1: icmp_seq=2 ttl=64 time=0.871 ms
64 bytes from 172.17.88.1: icmp_seq=3 ttl=64 time=6.78 ms
64 bytes from 172.17.88.1: icmp_seq=4 ttl=64 time=0.874 ms
^C
--- 172.17.88.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3011ms
rtt min/avg/max/mdev = 0.581/2.277/6.783/2.604 ms

Cluster Deployment – Create the Node kubeconfig Files

1. Create the TLS bootstrapping token
2. Create the kubelet kubeconfig
3. Create the kube-proxy kubeconfig

Download the package: https://dl.k8s.io/v1.11.7/kubernetes-server-linux-amd64.tar.gz

[root@master master_pkg]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@master master_pkg]# mv kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin
[root@master bin]# pwd
/opt/kubernetes/bin
[root@master bin]# chmod +x kubectl
[root@master bin]# echo "PATH=$PATH:/opt/kubernetes/bin" >>/etc/profile
[root@master bin]# source /etc/profile
[root@master ssl]# pwd
/root/ssl
[root@master ssl]# cat kubeconfig.sh
# Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://192.168.247.211:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Select the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master ssl]# sh kubeconfig.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master ssl]# cat token.csv
dc434e4db0f27ac84703bacbb8157540,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master ssl]# cp token.csv /opt/kubernetes/cfg/

Cluster Deployment – Run the Master Components

Installation scripts for the three master components:

[root@master master_pkg]# cat apiserver.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.1.195"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

[root@master master_pkg]# cat controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

[root@master master_pkg]# cat scheduler.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

apiserver configuration file

Parameter notes:

  • --logtostderr: log to stderr

  • --v: log verbosity

  • --etcd-servers: etcd cluster endpoints

  • --bind-address: listen address

  • --secure-port: HTTPS secure port

  • --advertise-address: address advertised to the cluster

  • --allow-privileged: allow privileged containers

  • --service-cluster-ip-range: Service virtual IP range

  • --enable-admission-plugins: admission-control plugins

  • --authorization-mode: authorization modes; enables RBAC and Node self-management

  • --enable-bootstrap-token-auth: enables TLS bootstrapping, covered later

  • --token-auth-file: token file

  • --service-node-port-range: default port range allocated to NodePort Services

Deploy the master

[root@master ~]# cp ssl/ca*pem ssl/server*pem /opt/kubernetes/ssl/
[root@master master_pkg]# chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
[root@master master_pkg]# ./apiserver.sh 192.168.247.211 https://192.168.247.211:2379,https://192.168.247.212:2379,https://192.168.247.213:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master master_pkg]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master master_pkg]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master master_pkg]# echo "export PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
[root@master master_pkg]# source /etc/profile

Cluster Deployment – Run the Node Components (node01, node02)

1. Copy the node config files from the master into /opt/kubernetes/cfg/ on each node

[root@master ssl]# scp *kubeconfig 192.168.247.212:/opt/kubernetes/cfg/
[root@node01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@node01 ~]# mv kubelet kube-proxy /opt/kubernetes/bin

2. Installation scripts for the two node components

[root@node01 ~]# cat kubelet.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

[root@node01 ~]# cat proxy.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.200"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

kubelet configuration file

Parameter notes:

  • --hostname-override: host name shown in the cluster

  • --kubeconfig: location of the kubeconfig file, generated automatically

  • --bootstrap-kubeconfig: the bootstrap.kubeconfig file generated earlier

  • --cert-dir: where issued certificates are stored

  • --pod-infra-container-image: image that manages the Pod network

3. Deploy the node

[root@node01 ~]# chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
[root@node01 ~]# ./kubelet.sh 192.168.247.212 10.10.10.2
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# ./proxy.sh 192.168.247.212
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

4. Bind kubelet-bootstrap on the master

```shell
[root@master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
[root@node01 cfg]# systemctl start kubelet && systemctl enable kubelet
[root@node01 cfg]# systemctl start kube-proxy && systemctl enable kube-proxy
[root@master ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54   26s       kubelet-bootstrap   Pending

[root@master ssl]# kubectl certificate approve node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54
certificatesigningrequest "node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54" approved
[root@master ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54   1m        kubelet-bootstrap   Approved,Issued
```
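When several nodes bootstrap at once, approving each CSR by hand gets tedious. A hedged sketch of extracting every Pending request name from the `kubectl get csr` output (the sample text mirrors the transcript above; on a real master you would pipe `kubectl get csr` in directly):

```shell
#!/bin/sh
# Extract the NAME column of every row whose CONDITION is Pending.
# csr_output is a captured sample; replace it with `kubectl get csr` output.
csr_output='NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-atAc1doj0IP5p48t-yz8FphTOxJYILpu_I9RY5ejL54   26s       kubelet-bootstrap   Pending'

pending=$(printf '%s\n' "$csr_output" | awk 'NR > 1 && $NF == "Pending" {print $1}')
echo "$pending"
# Each extracted name can then be fed to:
#   kubectl certificate approve <name>
```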

Cluster deployment – check cluster status

```shell
# kubectl get node
# kubectl get componentstatus
```


Cluster deployment – start a test example

```shell
# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx
```
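`kubectl expose ... --type=NodePort` maps service port 88 to a high port on every node, and `kubectl get svc nginx` shows it in the PORT(S) column as `88:<nodePort>/TCP`. A small sketch of pulling that node port out of the column (the value 31234 is a made-up example; the real port is assigned by the cluster):

```shell
#!/bin/sh
# Extract the node port from a PORT(S) value like "88:31234/TCP".
# "88:31234/TCP" is a hypothetical sample of the kubectl output format.
ports="88:31234/TCP"
node_port=$(printf '%s' "$ports" | sed 's/^[0-9]*:\([0-9]*\)\/TCP$/\1/')
echo "$node_port"
# The test deployment is then reachable at http://<node-ip>:$node_port
```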

Cluster deployment – deploy the Web UI (Dashboard)

Dashboard manifest:

[root@master k8s_yaml]# cat kubernetes-dashboard.yaml

```yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
```

[root@master k8s_yaml]# cat dashboard-admin.yaml

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
```

Install the dashboard, then open https://192.168.247.212:30001/#! and click Skip on the sign-in page.

```shell
[root@master k8s_yaml]# kubectl apply -f kubernetes-dashboard.yaml
[root@master k8s_yaml]# kubectl apply -f dashboard-admin.yaml
```

Alternatively, sign in with a token (the ServiceAccount created above is named admin, so its secret is named admin-token-<hash>):

```shell
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-token | awk '{print $1}') | grep token
```

Watch out for a pitfall here: the copied token may pick up line breaks; paste it into a plain-text editor and remove the line breaks before using it.
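Instead of cleaning the copied token by hand, the line breaks can be stripped in the shell. A minimal sketch, with a short dummy string standing in for a real token:

```shell
#!/bin/sh
# A real service-account token is one long base64url string; terminal
# copy/paste may wrap it. tr -d '\n' removes the inserted line breaks.
# "wrapped" below is a dummy stand-in, not a real token.
wrapped='eyJhbGciOiJSUzI1NiIs
ImtpZCI6IiJ9'
token=$(printf '%s' "$wrapped" | tr -d '\n')
echo "$token"
```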

The scripts used in this deployment can be downloaded from: https://github.com/hejianlai/Docker-Kubernetes/tree/master/Kubernetes/install
