Installing a Kubernetes (k8s) cluster with yum
This walkthrough uses one master and three nodes; adjust the number of nodes to suit your environment:
192.168.1.130 master
192.168.1.131 node1
192.168.1.132 node2
192.168.1.133 node3
Enable NTP on the master and all nodes:
[root@k-master ~]# yum -y install ntp
[root@k-master ~]# systemctl start ntpd
[root@k-master ~]# systemctl enable ntpd
[root@k-master ~]# hwclock --systohc
[root@k-node1 ~]# yum -y install ntp
[root@k-node1 ~]# systemctl start ntpd
[root@k-node1 ~]# systemctl enable ntpd
[root@k-node1 ~]# hwclock --systohc
[root@k-node2 ~]# yum -y install ntp
[root@k-node2 ~]# systemctl start ntpd
[root@k-node2 ~]# systemctl enable ntpd
[root@k-node2 ~]# hwclock --systohc
[root@k-node3 ~]# yum -y install ntp
[root@k-node3 ~]# systemctl start ntpd
[root@k-node3 ~]# systemctl enable ntpd
[root@k-node3 ~]# hwclock --systohc
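The same four commands run on every host. Assuming passwordless SSH from the master (and the hostnames used in this lab), the repetition can be scripted; the sketch below only prints the invocations, so it is safe to dry-run before piping it to `sh`:

```shell
#!/bin/sh
# Hosts used in this lab; adjust to your environment.
HOSTS="k-master k-node1 k-node2 k-node3"

# One command string so every host gets an identical setup.
NTP_SETUP="yum -y install ntp && systemctl start ntpd && systemctl enable ntpd && hwclock --systohc"

# Print the ssh invocations; pipe the output to 'sh' to actually run them.
ntp_setup_commands() {
  for HOST in $HOSTS; do
    echo "ssh root@$HOST \"$NTP_SETUP\""
  done
}

ntp_setup_commands
```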
Add entries in “/etc/hosts” or records in your DNS:
[root@k-master ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node1 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node2 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node3 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
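Rather than editing four files by hand, the records can be appended idempotently. This is a sketch; `add_hosts_entries` is a hypothetical helper that takes the hosts-file path as an argument so it can be exercised against a temporary file:

```shell
#!/bin/sh
# Cluster records from this lab.
HOSTS_BLOCK='192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3'

# Append the records only if they are not already present, so the
# function can be re-run safely on every host.
add_hosts_entries() {
  hosts_file="$1"   # normally /etc/hosts
  if ! grep -q 'k-master' "$hosts_file"; then
    printf '%s\n' "$HOSTS_BLOCK" >> "$hosts_file"
  fi
}
```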
Install the required RPMs:
- On the master:
[root@k-master ~]# yum -y install etcd kubernetes
...
...
...
Installed:
etcd.x86_64 0:2.1.1-2.el7 kubernetes.x86_64 0:1.0.3-0.2.gitb9a88a7.el7
Dependency Installed:
audit-libs-python.x86_64 0:2.4.1-5.el7 checkpolicy.x86_64 0:2.1.12-6.el7
docker.x86_64 0:1.8.2-10.el7.centos docker-selinux.x86_64 0:1.8.2-10.el7.centos
kubernetes-client.x86_64 0:1.0.3-0.2.gitb9a88a7.el7 kubernetes-master.x86_64 0:1.0.3-0.2.gitb9a88a7.el7
kubernetes-node.x86_64 0:1.0.3-0.2.gitb9a88a7.el7 libcgroup.x86_64 0:0.41-8.el7
libsemanage-python.x86_64 0:2.1.10-18.el7 policycoreutils-python.x86_64 0:2.2.5-20.el7
python-IPy.noarch 0:0.75-6.el7 setools-libs.x86_64 0:3.3.7-46.el7
socat.x86_64 0:1.7.2.2-5.el7
Complete!
- On the nodes:
[root@k-node1 ~]# yum -y install flannel kubernetes
[root@k-node2 ~]# yum -y install flannel kubernetes
[root@k-node3 ~]# yum -y install flannel kubernetes
Stop the firewall
For convenience, we will stop the firewall on all hosts during this lab:
[root@k-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node2 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node3 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
On the Kubernetes master
- Configure the “etcd” distributed key-value store:
[root@k-master ~]# egrep -v "^#|^$" /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
- Kubernetes API server configuration file:
[root@k-master ~]# egrep -v "^#|^$" /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
- Start and enable all Kubernetes services:
[root@k-master ~]# for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler
> do
> systemctl restart $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service
The master now has these listening ports:
[root@k-master ~]# netstat -ntulp | egrep -v "ntpd|sshd"
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 2913/kube-scheduler
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 2887/kube-controlle
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 2828/etcd
tcp 0 0 127.0.0.1:7001 0.0.0.0:* LISTEN 2828/etcd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2356/master
tcp6 0 0 :::2379 :::* LISTEN 2828/etcd
tcp6 0 0 :::8080 :::* LISTEN 2858/kube-apiserver
tcp6 0 0 ::1:25 :::* LISTEN 2356/master
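Before going further, it can be handy to wait until the API server actually answers; kube-apiserver exposes a /healthz endpoint that returns “ok” when the server is up. A hedged sketch (assumes curl is installed; the URL and retry count are examples):

```shell
#!/bin/sh
# Poll the API server's /healthz endpoint until it answers or we give up.
wait_for_apiserver() {
  url="$1"        # e.g. http://192.168.1.130:8080
  attempts="$2"   # how many one-second retries before failing
  i=0
  until curl -sf "$url/healthz" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
  return 0
}
```

Usage: `wait_for_apiserver http://192.168.1.130:8080 30 && echo "apiserver up"`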
- Create the “etcd” key that flannel will read its network configuration from:
[root@k-master ~]# etcdctl mk /frederic.wou/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
[root@k-master ~]# etcdctl ls /frederic.wou --recursive
/frederic.wou/network
/frederic.wou/network/config
[root@k-master ~]# etcdctl get /frederic.wou/network/config
{"Network":"172.17.0.0/16"}
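The same key is also reachable over etcd's v2 HTTP API, which is what flannel itself uses. The helper below only builds the URL (the server address is the master used in this lab):

```shell
#!/bin/sh
ETCD="http://192.168.1.130:2379"

# Map an etcd key path to its v2 HTTP API URL.
etcd_key_url() {
  echo "$ETCD/v2/keys$1"
}

# To fetch the value (requires the etcd server to be reachable):
#   curl -s "$(etcd_key_url /frederic.wou/network/config)"
```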
On each minion node
- flannel configuration:
[root@k-node1 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
[root@k-node2 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
[root@k-node3 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
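The flanneld configuration is byte-for-byte identical on the three nodes, so it can be generated once and pushed out. A sketch (the ssh one-liner in the comment is an example, not part of the original procedure):

```shell
#!/bin/sh
# Emit the flanneld settings used in this lab.
flanneld_config() {
  cat <<'EOF'
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
EOF
}

# e.g.: flanneld_config | ssh root@k-node1 'cat > /etc/sysconfig/flanneld'
```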
- Kubernetes:
[root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
[root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
[root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
- kubelet:
[root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node1"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
[root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node2"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
[root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node3"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
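The three kubelet files differ only in the hostname override, which makes them a good candidate for templating (a hypothetical helper, not part of the original procedure):

```shell
#!/bin/sh
# Emit the kubelet config for a node; only $1 (the node hostname) varies.
kubelet_config() {
  cat <<EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=$1"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
EOF
}

# e.g.: kubelet_config k-node1 | ssh root@k-node1 'cat > /etc/kubernetes/kubelet'
```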
- Start and enable all node services (the first run may report failures; see Troubleshooting below):
[root@k-node1 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k-node2 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k-node3 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
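After (re)starting, a quick loop over the four services shows which ones actually came up. A sketch; the optional first argument lets you substitute another checker command for systemctl when exercising the script outside a systemd host:

```shell
#!/bin/sh
# Report the state of the four node services.
check_node_services() {
  CHECK="${1:-systemctl is-active --quiet}"
  for SERVICE in kube-proxy kubelet docker flanneld; do
    if $CHECK "$SERVICE"; then
      echo "$SERVICE: running"
    else
      echo "$SERVICE: NOT running"
    fi
  done
}
```

Usage on a node: `check_node_services`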
Kubernetes is now ready:
[root@k-master ~]# kubectl get nodes
NAME LABELS STATUS
192.168.1.131 kubernetes.io/hostname=192.168.1.131 Ready
192.168.1.132 kubernetes.io/hostname=192.168.1.132 Ready
192.168.1.133 kubernetes.io/hostname=192.168.1.133 Ready
Troubleshooting
Unable to start Docker on minion nodes
[root@k-node1 ~]# systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details
Check the “ntp” service:
[root@k-node1 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+173.ip-37-59-12 36.224.68.195 2 u - 64 7 32.539 -0.030 0.477
*moz75-1-78-194- 213.251.128.249 2 u 4 64 7 30.108 -0.988 0.967
-ntp.tuxfamily.n 138.96.64.10 2 u 67 64 7 25.934 -1.495 0.504
+x1.f2tec.de 10.2.0.1 2 u 62 64 7 32.307 -0.044 0.466
Is “flanneld” up and running?
[root@k-node1 ~]# ip addr show dev flannel0
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 172.17.85.0/16 scope global flannel0
valid_lft forever preferred_lft forever
Can this node connect to the “etcd” master?
[root@k-node1 ~]# curl -s -L http://192.168.1.130:2379/version
{"etcdserver":"2.1.1","etcdcluster":"2.1.0"}[root@k-node1 ~]
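The raw JSON is enough for a quick check, but the server version can also be pulled out with sed for scripting. A sketch that assumes the 2.x response shape shown above (no jq required):

```shell
#!/bin/sh
# Extract the "etcdserver" field from an etcd /version response body.
etcd_server_version() {
  echo "$1" | sed -n 's/.*"etcdserver":"\([^"]*\)".*/\1/p'
}

# e.g.: etcd_server_version "$(curl -s -L http://192.168.1.130:2379/version)"
```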
Is the “kube-proxy” service running?
[root@k-node1 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2016-02-03 14:50:25 CET; 1min 0s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2072 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─2072 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.1.130:8080
Feb 03 14:50:25 k-node1 systemd[1]: Starting Kubernetes Kube-Proxy Server...
Feb 03 14:50:25 k-node1 systemd[1]: Started Kubernetes Kube-Proxy Server.
Try starting the Docker daemon manually:
[root@k-node1 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.17.85.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=true --mtu=1472 "
[root@k-node1 ~]# /usr/bin/docker daemon -D --selinux-enabled --bip=172.17.85.1/24 --ip-masq=true --mtu=1472
...
...
...
INFO[0001] Docker daemon commit=a01dc02/1.8.2 execdriver=native-0.2 graphdriver=devicemapper version=1.8.2-el7.centos
Adapted from http://frederic-wou.net/kubernetes-first-step-on-centos-7-2/