Kubernetes Cloud Platform Management in Practice: Cluster Deployment (Part 1)
1. Environment Planning
1.1 Architecture topology
1.2 Host planning
1.3 Software versions
For the latest versions, see: https://www.cnblogs.com/luoahong/p/12917582.html
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@k8s-master ~]# uname -r
3.10.0-693.el7.x86_64
[root@k8s-master ~]# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-68.gitec8512b.el7.centos.x86_64
Go version: go1.8.3
Git commit: ec8512b/1.12.6
Built: Mon Dec 11 16:08:42 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-68.gitec8512b.el7.centos.x86_64
Go version: go1.8.3
Git commit: ec8512b/1.12.6
Built: Mon Dec 11 16:08:42 2017
OS/Arch: linux/amd64
[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master ~]# kubectl get node
NAME STATUS AGE
k8s-node1 Ready 1d
k8s-node2 Ready 1d
1.4 Set hostnames and hosts resolution
1.4.1 Set hostnames
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
1.4.2 hosts resolution
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.128.0 k8s-master
10.0.128.1 k8s-node1
10.0.128.2 k8s-node2
scp -rp /etc/hosts 10.0.128.1:/etc/hosts
scp -rp /etc/hosts 10.0.128.2:/etc/hosts
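The hosts entries above are easy to get wrong by hand (the IP must come first). A minimal sketch of building them idempotently from one list — `HOSTS_FILE` is a stand-in for `/etc/hosts` in this demo, and the IPs match this article's environment:

```shell
# Append a cluster record only if the hostname is not already present,
# so re-running the script never duplicates entries.
HOSTS_FILE=$(mktemp)
add_host() {
  local ip=$1 name=$2
  grep -q " ${name}\$" "$HOSTS_FILE" || echo "${ip} ${name}" >> "$HOSTS_FILE"
}
add_host 10.0.128.0 k8s-master
add_host 10.0.128.1 k8s-node1
add_host 10.0.128.1 k8s-node1   # duplicate call is a no-op
add_host 10.0.128.2 k8s-node2
cat "$HOSTS_FILE"
```

On a real master you would point `HOSTS_FILE` at `/etc/hosts` and then `scp` it out as shown above.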
2. Install docker-1.12.6-68 on All Nodes
2.1 Download the RPM packages locally
http://vault.centos.org/7.4.1708/extras/x86_64/Packages/
2.2 Install
yum localinstall docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
yum localinstall docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
yum localinstall docker-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
2.3 Start
[root@k8s-master ~]# systemctl enable docker.service
[root@k8s-master ~]# systemctl start docker.service
[root@k8s-node1 ~]# systemctl enable docker.service
[root@k8s-node1 ~]# systemctl start docker.service
[root@k8s-node2 ~]# systemctl enable docker.service
[root@k8s-node2 ~]# systemctl start docker.service
2.4 Pitfalls
1. The install order — docker-common, then docker-client, then docker-1.12.6-68 — must not be changed, otherwise the installation fails.
2. I swapped docker-client and docker-common the first time and lost half a day to it.
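To make the ordering mistake impossible, the three installs can be driven from one ordered list. This is a dry-run sketch that only echoes the commands; drop the `echo` to run them for real:

```shell
# The three RPMs in the order they must be installed:
# common first, then client, then the engine itself.
plan=$(for pkg in docker-common docker-client docker; do
  echo "yum localinstall ${pkg}-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y"
done)
printf '%s\n' "$plan"
```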
3. Install etcd on the Master Node
etcd is the key-value store that backs Kubernetes; it natively supports clustering.
3.1 Install
yum install etcd.x86_64 -y
3.2 Configure
vim /etc/etcd/etcd.conf
Change the following two lines:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.128.0:2379"
3.3 Start
systemctl start etcd.service
systemctl enable etcd.service
3.4 Health check
[root@k8s-master ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 1396/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1162/sshd
tcp6 0 0 :::2379 :::* LISTEN 1396/etcd
tcp6 0 0 :::22 :::* LISTEN 1162/sshd
udp 0 0 0.0.0.0:30430 0.0.0.0:* 1106/dhclient
udp 0 0 0.0.0.0:68 0.0.0.0:* 1106/dhclient
udp 0 0 127.0.0.1:323 0.0.0.0:* 869/chronyd
udp6 0 0 :::42997 :::* 1106/dhclient
udp6 0 0 ::1:323 :::* 869/chronyd
[root@k8s-master ~]# etcdctl set tesstdir/testkey0 0
0
[root@k8s-master ~]# etcdctl get tesstdir/testkey0
0
[root@k8s-master ~]# etcdctl -C http://10.0.128.0:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.128.0:2379
cluster is healthy
[root@k8s-master ~]# systemctl stop etcd.service
[root@k8s-master ~]# etcdctl -C http://10.0.128.0:2379 cluster-health
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 10.0.128.0:2379: getsockopt: connection refused
error #0: dial tcp 10.0.128.0:2379: getsockopt: connection refused
[root@k8s-master ~]# systemctl start etcd.service
[root@k8s-master ~]# etcdctl -C http://10.0.128.0:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.128.0:2379
cluster is healthy
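As the transcript above shows, a probe fired while etcd is restarting reports "connection refused" even though the service comes back a moment later. A small retry wrapper avoids declaring the cluster dead on the first refused connection. This is a sketch: in real use the probe would be `etcdctl -C http://10.0.128.0:2379 cluster-health`; `true` stands in for it here so the snippet is self-contained:

```shell
# Run the probe command (passed as arguments) up to $1 times,
# sleeping between attempts, and report when it first succeeds.
check_with_retry() {
  local tries=$1; shift
  local i
  for i in $(seq 1 "$tries"); do
    if "$@" >/dev/null 2>&1; then
      echo "healthy after attempt $i"
      return 0
    fi
    sleep 1
  done
  echo "still unhealthy after $tries attempts"
  return 1
}
check_with_retry 3 true   # prints: healthy after attempt 1
```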
4. Install Kubernetes on the Master Node
4.1 Install
[root@k8s-master ~]# yum install kubernetes-master.x86_64 -y
4.2 Configure
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
Change the following:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.128.0:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
[root@k8s-master ~]# vim /etc/kubernetes/config
Change the following:
KUBE_MASTER="--master=http://10.0.128.0:8080"
4.3 Start
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service
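The six commands above can be expressed as one loop over the three master components. Echoed here as a dry run; remove the `echo` on a real master:

```shell
# Start each master component, then enable it on boot.
plan=$(for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  for action in start enable; do
    echo "systemctl ${action} ${svc}.service"
  done
done)
printf '%s\n' "$plan"
```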
4.4 Component roles
kube-apiserver: accepts and responds to user requests
kube-controller-manager: runs the controllers that keep containers alive
kube-scheduler: the scheduler, which picks the node a container starts on
5. Install Kubernetes on the Nodes
5.1 Install
[root@k8s-node1 ~]# yum install kubernetes-node.x86_64 -y
5.2 Configure
[root@k8s-node1 ~]# vim /etc/kubernetes/config
Change the following:
KUBE_MASTER="--master=http://10.0.128.0:8080"
[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet
Change the following:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=k8s-node1"
KUBELET_API_SERVER="--api-servers=http://10.0.128.0:8080"
5.3 Start
[root@k8s-node1 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node1 ~]# systemctl start kubelet.service
[root@k8s-node1 ~]# systemctl enable kube-proxy.service
[root@k8s-node1 ~]# systemctl start kube-proxy.service
5.4 Component roles
kubelet: drives docker and manages the container lifecycle (much as nova-compute drives libvirt to manage VM lifecycles in OpenStack)
kube-proxy: provides network access to containers
6. Configure the flannel Network on All Nodes
6.1 Master node
6.1.1 Install
[root@k8s-master ~]# yum install flannel -y
6.1.2 Configure
[root@k8s-master ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.128.0:2379#g' /etc/sysconfig/flanneld
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network":"172.16.0.0/16" }'
{ "Network":"172.16.0.0/16" }
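A stray quote or brace in the network config written to etcd silently breaks flanneld on every node, so it is worth validating the JSON before `etcdctl mk`. A sketch, assuming `python3` is available for the check:

```shell
# Validate the flannel network config locally before writing it into etcd.
CONFIG='{ "Network": "172.16.0.0/16" }'
if echo "$CONFIG" | python3 -m json.tool >/dev/null 2>&1; then
  echo "config OK"
  # etcdctl mk /atomic.io/network/config "$CONFIG"   # uncomment on the master
else
  echo "config is not valid JSON" >&2
  exit 1
fi
```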
6.1.3 Start
[root@k8s-master ~]# systemctl enable flanneld.service
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-master ~]# systemctl start flanneld.service
6.1.4 Verify the installation
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service
[root@k8s-master ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 172.16.67.0 netmask 255.255.0.0 destination 172.16.67.0
inet6 fe80::b9bb:f96a:188d:426e prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
6.2 Nodes
6.2.1 Install
[root@k8s-node1 ~]# yum install flannel -y
[root@k8s-node2 ~]# yum install flannel -y
6.2.2 Configure
[root@k8s-node1 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.128.0:2379#g' /etc/sysconfig/flanneld
[root@k8s-node2 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.128.0:2379#g' /etc/sysconfig/flanneld
6.2.3 Start
[root@k8s-node1 ~]# systemctl enable flanneld.service
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node1 ~]# systemctl start flanneld.service
[root@k8s-node2 ~]# systemctl enable flanneld.service
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node2 ~]# systemctl start flanneld.service
6.2.4 Verify the installation
[root@k8s-node1 ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 172.16.10.0 netmask 255.255.0.0 destination 172.16.10.0
inet6 fe80::6fc:c859:c331:833d prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@k8s-node2 ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 172.16.48.0 netmask 255.255.0.0 destination 172.16.48.0
inet6 fe80::2f4c:6385:45ca:34b8 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
6.3 Container network connectivity test
6.3.1 Start a container on each node and get its IP address
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
k8s-node1 Ready 10h
k8s-node2 NotReady 9h
[root@k8s-master ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:30:02
inet addr:172.16.48.2 Bcast:0.0.0.0 Mask:255.255.255.0
[root@k8s-node1 ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:0A:02
inet addr:172.16.10.2 Bcast:0.0.0.0 Mask:255.255.255.0
[root@k8s-node2 ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:30:02
inet addr:172.16.48.2 Bcast:0.0.0.0 Mask:255.255.255.0
6.3.2 Cross-node connectivity test
[root@k8s-node2 ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:30:02
inet addr:172.16.48.2 Bcast:0.0.0.0 Mask:255.255.255.0
/ # ping 172.16.10.2
PING 172.16.10.2 (172.16.10.2): 56 data bytes
64 bytes from 172.16.10.2: seq=0 ttl=60 time=5.212 ms
64 bytes from 172.16.10.2: seq=1 ttl=60 time=1.076 ms
^C
--- 172.16.10.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 1.076/2.118/5.212 ms
/ # ping 172.16.48.2
PING 172.16.48.2 (172.16.48.2): 56 data bytes
64 bytes from 172.16.48.2: seq=0 ttl=60 time=5.717 ms
64 bytes from 172.16.48.2: seq=1 ttl=60 time=1.108 ms
^C
--- 172.16.48.2 ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 1.108/1.904/5.717 ms
6.4 Pitfall
With docker 1.13, containers on different nodes may fail to reach each other, because that release changed the default iptables FORWARD policy to DROP. Fix it with:
iptables -P FORWARD ACCEPT
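The iptables command above does not survive a docker restart or a reboot. One way to make it stick — a sketch only; the drop-in path and file name are assumptions, not part of the original setup — is a systemd drop-in that resets the policy each time docker starts:

```ini
# /etc/systemd/system/docker.service.d/10-forward-accept.conf (hypothetical path)
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
```

After creating the file, apply it with `systemctl daemon-reload` followed by `systemctl restart docker`.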
7. Configure the Master as a Private Image Registry
7.1 Master node
7.1.1 Configure
vim /etc/sysconfig/docker and change the following:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.128.0:5000'
systemctl restart docker
7.1.2 Start the private registry
[root@k8s-master ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
Unable to find image 'registry:latest' locally
Trying to pull repository docker.io/library/registry ...
latest: Pulling from docker.io/library/registry
cd784148e348: Pull complete
0ecb9b11388e: Pull complete
918b3ddb9613: Pull complete
5aa847785533: Pull complete
adee6f546269: Pull complete
Digest: sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57
ba8d9b958c7c0867d5443ee23825f842a580331c87f6555678709dfc8899ff17
[root@k8s-master ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba8d9b958c7c registry "/entrypoint.sh /etc/" 11 seconds ago Up 9 seconds 0.0.0.0:5000->5000/tcp registry
15340ee09614 busybox "/bin/sh" 3 hours ago Exited (1) 19 minutes ago sleepy_mccarthy
7.1.3 Push an image to test
[root@k8s-master ~]# docker tag docker.io/busybox:latest 10.0.128.0:5000/busybox:latest
[root@k8s-master ~]# docker push 10.0.128.0:5000/busybox:latest
The push refers to a repository [10.0.128.0:5000/busybox]
683f499823be: Pushed
latest: digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510 size: 527
7.2 Nodes
7.2.1 Configure
vim /etc/sysconfig/docker and change the following:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.128.0:5000'
systemctl restart docker
7.2.2 Test
[root@k8s-node1 ~]# docker pull 10.0.128.0:5000/busybox:latest
Trying to pull repository 10.0.128.0:5000/busybox ...
latest: Pulling from 10.0.128.0:5000/busybox
Digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510