Installing and Deploying a Kubernetes Cluster
This article follows https://www.cnblogs.com/zhenyuyaodidiao/p/6500830.html, with minor adjustments based on the actual installation process.
1. Environment Overview and Preparation
1.1 Host Operating System
The physical machines run 64-bit CentOS 7; the exact kernel and release of this environment are shown below.
- [root@k8s-master ~]# uname -a
- Linux k8s-master 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
- [root@k8s-master ~]# cat /etc/redhat-release
- CentOS Linux release 7.6.1810 (Core)
1.2 Host Information
Three machines are used to deploy the k8s runtime environment; details below:
| Role | Hostname | IP |
| --- | --- | --- |
| Master, etcd, registry | k8s-master | 192.168.44.60 |
| Node1 | k8s-slave01 | 192.168.44.61 |
| Node2 | k8s-slave02 | 192.168.44.62 |
In addition, passwordless SSH login has been configured between the three machines; a minimal setup sketch follows.
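A minimal sketch of the passwordless-login setup, run from the master. The key type and file paths are the ssh defaults, and the hostnames assume the /etc/hosts entries below:

```bash
# Generate a key pair on the master (default path, empty passphrase)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Push the public key to each node so ssh stops prompting for a password
ssh-copy-id root@k8s-slave01
ssh-copy-id root@k8s-slave02
# Quick check: should print the remote hostname without a password prompt
ssh root@k8s-slave01 hostname
```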
All three machines also need the following /etc/hosts configuration:
- [root@k8s-master ~]# cat /etc/hosts
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
- 192.168.44.60 etcd
- 192.168.44.60 registry
- 192.168.44.60 k8s-master
- 192.168.44.61 k8s-slave01
- 192.168.44.62 k8s-slave02
1.3 Disable the Firewall on All Three Machines
- systemctl disable firewalld.service
- systemctl stop firewalld.service
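An optional check, not part of the original write-up, to confirm the firewall is really off on each machine:

```bash
systemctl is-active firewalld   # expect "inactive"
firewall-cmd --state            # expect "not running"
```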
2. Deploying etcd
Kubernetes depends on etcd for its state, so deploy etcd first. This article installs it via yum:
- [root@k8s-master ~]# yum install etcd -y
The yum-installed etcd keeps its default configuration file at /etc/etcd/etcd.conf. Edit it and change the settings shown below; the lines modified here are ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS:
- [root@k8s-master ~]# vim /etc/etcd/etcd.conf
- # [member]
- ETCD_NAME=master
- ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
- #ETCD_WAL_DIR=""
- #ETCD_SNAPSHOT_COUNT="10000"
- #ETCD_HEARTBEAT_INTERVAL="100"
- #ETCD_ELECTION_TIMEOUT="1000"
- #ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
- ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
- #ETCD_MAX_SNAPSHOTS="5"
- #ETCD_MAX_WALS="5"
- #ETCD_CORS=""
- #
- #[cluster]
- #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
- # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
- #ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
- #ETCD_INITIAL_CLUSTER_STATE="new"
- #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
- ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
- #ETCD_DISCOVERY=""
- #ETCD_DISCOVERY_SRV=""
- #ETCD_DISCOVERY_FALLBACK="proxy"
- #ETCD_DISCOVERY_PROXY=""
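If you prefer to script the three changes rather than edit the file by hand, a sed sketch (assuming the stock configuration file shipped by the etcd package):

```bash
# Back up the original, then rewrite the three settings in place
cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
sed -i \
  -e 's|^ETCD_NAME=.*|ETCD_NAME=master|' \
  -e 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"|' \
  -e 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"|' \
  /etc/etcd/etcd.conf
```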
Start etcd and verify its status:
- [root@k8s-master ~]# systemctl start etcd
- [root@k8s-master ~]# etcdctl set testdir/testkey0 0
- 0
- [root@k8s-master ~]# etcdctl get testdir/testkey0
- 0
- [root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health
- member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
- cluster is healthy
- [root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
- member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
- cluster is healthy
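Once the health check passes, the test key can be removed again (same etcdctl v2 syntax as above):

```bash
etcdctl rm testdir/testkey0
```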
3. Deploying the Master
3.1 Install Docker
- [root@k8s-master ~]# yum install docker
Edit Docker's configuration file so that the daemon is allowed to pull images from the local registry.
- [root@k8s-master ~]# vim /etc/sysconfig/docker
- # /etc/sysconfig/docker
- # Modify these options if you want to change the way the docker daemon runs
- OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
- if [ -z "${DOCKER_CERT_PATH}" ]; then
- DOCKER_CERT_PATH=/etc/docker
- fi
Note: the --insecure-registry registry:5000 option above tells Docker to use the local image registry over plain HTTP. It is merged into the single OPTIONS line because this file is sourced by the Docker service, so a second OPTIONS= assignment would silently override the first. Building and using the local registry is covered in the separate article "Docker私有仓库的搭建及使用" (setting up and using a private Docker registry).
Enable Docker at boot and start the service:
- [root@k8s-master ~]# chkconfig docker on
- [root@k8s-master ~]# service docker start
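To confirm the daemon came up and picked up the insecure-registry option (the exact wording of docker info varies between Docker versions):

```bash
systemctl status docker --no-pager
docker info 2>/dev/null | grep -i -A1 insecure
```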
3.2 Install Kubernetes
- [root@k8s-master ~]# yum install kubernetes
3.3 Configure and Start Kubernetes
The following components must run on the Kubernetes master:
- Kubernetes API Server
- Kubernetes Controller Manager
- Kubernetes Scheduler
Update the corresponding settings in the configuration files below:
3.3.1 /etc/kubernetes/apiserver
- [root@k8s-master ~]# vim /etc/kubernetes/apiserver
- ###
- # kubernetes system config
- #
- # The following values are used to configure the kube-apiserver
- #
- # The address on the local server to listen to.
- KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
- # The port on the local server to listen on.
- KUBE_API_PORT="--port=8080"
- # Port minions listen on
- # KUBELET_PORT="--kubelet-port=10250"
- # Comma separated list of nodes in the etcd cluster
- KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
- # Address range to use for services
- KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
- # default admission control policies
- #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
- KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
- # Add your own!
- KUBE_API_ARGS=""
3.3.2 /etc/kubernetes/config
- [root@k8s-master ~]# vim /etc/kubernetes/config
- ###
- # kubernetes system config
- #
- # The following values are used to configure various aspects of all
- # kubernetes services, including
- #
- # kube-apiserver.service
- # kube-controller-manager.service
- # kube-scheduler.service
- # kubelet.service
- # kube-proxy.service
- # logging to stderr means we get it in the systemd journal
- KUBE_LOGTOSTDERR="--logtostderr=true"
- # journal message level, 0 is debug
- KUBE_LOG_LEVEL="--v=0"
- # Should this cluster be allowed to run privileged docker containers
- KUBE_ALLOW_PRIV="--allow-privileged=false"
- # How the controller-manager, scheduler, and proxy find the apiserver
- KUBE_MASTER="--master=http://k8s-master:8080"
Enable the services at boot and start them:
- [root@k8s-master ~]# systemctl enable kube-apiserver.service
- [root@k8s-master ~]# systemctl start kube-apiserver.service
- [root@k8s-master ~]# systemctl enable kube-controller-manager.service
- [root@k8s-master ~]# systemctl start kube-controller-manager.service
- [root@k8s-master ~]# systemctl enable kube-scheduler.service
- [root@k8s-master ~]# systemctl start kube-scheduler.service
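As a quick sanity check, not part of the original post, ask the API server for the health of the master components; all entries should report Healthy:

```bash
kubectl -s http://k8s-master:8080 get componentstatuses
```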
4. Deploying the Nodes (note: perform all of the following steps on both slave machines)
4.1 Install Docker
See section 3.1.
4.2 Install Kubernetes
Install it via yum on each of the two slave machines:
- yum install kubernetes
4.3 Configure and Start Kubernetes
The following components must run on each Kubernetes node:
- Kubelet
- Kubernetes Proxy
Update the corresponding settings in the configuration files below:
4.3.1 /etc/kubernetes/config
- [root@k8s-slave01 ~]# vim /etc/kubernetes/config
- ###
- # kubernetes system config
- #
- # The following values are used to configure various aspects of all
- # kubernetes services, including
- #
- # kube-apiserver.service
- # kube-controller-manager.service
- # kube-scheduler.service
- # kubelet.service
- # kube-proxy.service
- # logging to stderr means we get it in the systemd journal
- KUBE_LOGTOSTDERR="--logtostderr=true"
- # journal message level, 0 is debug
- KUBE_LOG_LEVEL="--v=0"
- # Should this cluster be allowed to run privileged docker containers
- KUBE_ALLOW_PRIV="--allow-privileged=false"
- # How the controller-manager, scheduler, and proxy find the apiserver
- KUBE_MASTER="--master=http://k8s-master:8080"
4.3.2 /etc/kubernetes/kubelet
- [root@k8s-slave01 ~]# vim /etc/kubernetes/kubelet
- ###
- # kubernetes kubelet (minion) config
- # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
- KUBELET_ADDRESS="--address=0.0.0.0"
- # The port for the info server to serve on
- # KUBELET_PORT="--port=10250"
- # You may leave this blank to use the actual hostname (change this to your own node name)
- KUBELET_HOSTNAME="--hostname-override=k8s-slave01"
- # location of the api-server (set this to your master's hostname)
- KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
- # pod infrastructure container (note this setting; it is discussed later)
- KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
- # Add your own!
- KUBELET_ARGS=""
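Since pod creation stalls when the kubelet cannot fetch the pod-infrastructure (pause) image named above, it can save trouble to pre-pull it on each node (an optional step, not in the original post):

```bash
# Pull the pause image ahead of time on each node
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
```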
Enable the services at boot and start them:
- [root@k8s-slave01 ~]# systemctl enable kubelet.service
- [root@k8s-slave01 ~]# systemctl start kubelet.service
- [root@k8s-slave01 ~]# systemctl enable kube-proxy.service
- [root@k8s-slave01 ~]# systemctl start kube-proxy.service
4.4 Check the Status
On the master, list the cluster's nodes and their status:
- [root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
- NAME STATUS AGE
- k8s-slave01 Ready 39s
- k8s-slave02 Ready 45s
- [root@k8s-master ~]# kubectl get nodes
- NAME STATUS AGE
- k8s-slave01 Ready 50s
- k8s-slave02 Ready 56s
At this point a Kubernetes cluster has been set up, but it cannot yet work properly because there is no overlay network for cross-node pod traffic; continue with the steps below.
5. Creating the Overlay Network: Flannel
5.1 Install Flannel
Run the following command on the master and on every node to install it:
- [root@k8s-master ~]# yum install flannel
5.2 Configure Flannel
On the master and on every node, edit /etc/sysconfig/flanneld and point FLANNEL_ETCD_ENDPOINTS at the etcd server:
- [root@k8s-master ~]# vi /etc/sysconfig/flanneld
- # Flanneld configuration options
- # etcd url location. Point this to the server where etcd runs
- FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
- # etcd config key. This is the configuration key that flannel queries
- # For address range assignment
- FLANNEL_ETCD_PREFIX="/atomic.io/network"
- # Any additional options that you want to pass
- #FLANNEL_OPTIONS=""
5.3 Configure the Flannel Key in etcd
Flannel reads its configuration from etcd, which keeps the configuration consistent across all Flannel instances, so the following key must be created in etcd. The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above, and the network value should cover the docker0 address reported by ifconfig; if it is wrong, flanneld will fail to start.
The docker0 address for reference:
- [root@k8s-slave01 ~]# ifconfig
- docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1472
- inet 172.17.78.1 netmask 255.255.255.0 broadcast 0.0.0.0
- inet6 fe80::42:d9ff:fe56:982c prefixlen 64 scopeid 0x20<link>
.....
Then run the following command:
- [root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.17.0.1/16" }'
- { "Network": "172.17.0.1/16" }
5.4 Start the Services
After starting Flannel, Docker and the Kubernetes services must be restarted in turn.
On the master:
- systemctl enable flanneld.service
- systemctl start flanneld.service
- service docker restart
- systemctl restart kube-apiserver.service
- systemctl restart kube-controller-manager.service
- systemctl restart kube-scheduler.service
On each node:
- systemctl enable flanneld.service
- systemctl start flanneld.service
- service docker restart
- systemctl restart kubelet.service
- systemctl restart kube-proxy.service
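After the restarts, every machine should have a flannel0 interface holding a subnet from the 172.17.0.0/16 range, and docker0 should have moved into that host's flannel subnet. A quick check (a sketch, assuming the default flannel and docker interface names):

```bash
cat /run/flannel/subnet.env   # shows the FLANNEL_SUBNET assigned to this host
ip addr show flannel0
ip addr show docker0
```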
At this point the cluster is essentially complete. In practice, most companies also want a web UI, so a follow-up article explains how to add a UI on top of this cluster.