[Original] Deploying Kubernetes with kubeadm (Part 1)
####################### Declaration #####################
In the WeChat official account 木子李的菜田, enter the keyword k8s to get the full series of installation documents.
This document comes from notes taken during a hands-on deployment on two machines. Kubernetes is under constant development, so there is no guarantee that every step matches the current release exactly; if you hit problems during installation, please Google the specific error.
This article is split into two parts.
What you will end up with by following the steps below:
1. Two hosts: one acting as the master (server), the other as a worker node.
2. The dashboard add-on installed on the node and presented as the Kubernetes Dashboard.
3. Solutions to the problems encountered along the way.
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
modprobe br_netfilter
lsmod | grep br_netfilter
rpm -qa |grep bridge-utils
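The modprobe above only loads br_netfilter for the current boot. To have it loaded automatically after a reboot as well, one option (a sketch; the file name is my own choice) is to register it with systemd-modules-load:
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf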
swapoff -a
vim /etc/fstab
Add a # in front of this line so the swap entry stays disabled after a reboot:
#/dev/mapper/centos-swap swap swap defaults 0 0
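If you prefer not to edit /etc/fstab by hand, a one-line alternative (a sketch; double-check that the pattern matches your swap entry before running it) is:
sed -i '/ swap / s/^/#/' /etc/fstab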
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
【Note】The script above creates /etc/sysconfig/modules/ipvs.modules so that the required kernel modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the modules were loaded correctly.
Next, make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect the ipvs proxy rules, it is also worth installing the management tool ipvsadm (yum install ipvsadm). If these prerequisites are not met,
kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.
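In other words, run the following on every node (a consolidated form of the two installs mentioned above):
yum install -y ipset ipvsadm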
yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
Or use:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
【Note】
- kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version.
yum list docker-ce.x86_64 --showduplicates |sort -r
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
yum install docker-ce-18.06.3.ce-3.el7 -y
systemctl start docker && systemctl enable docker
[root@k8s-master flannel]# cat <<EOF >/etc/docker/daemon.json
{
"registry-mirrors": ["https://72idtxd8.mirror.aliyuncs.com"]
}
EOF
systemctl reset-failed docker.service && systemctl restart docker.service
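To confirm the registry mirror has taken effect, a quick check (the exact output format may vary slightly by Docker version):
docker info | grep -A1 -i "registry mirrors"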
Configure the Kubernetes yum repository (Aliyun mirror):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Or, use the upstream Google repo instead (note the exclude=kube* line: with this repo you must install with --disableexcludes=kubernetes):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
sudo yum makecache fast
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Also enable the kubelet service (the kubeadm preflight check later warns about this if you skip it):
systemctl enable kubelet
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.
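A quick way to confirm all three binaries are installed and to see their versions (a sketch):
kubeadm version -o short
kubelet --version
kubectl version --client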
docker info | grep -i cgroup
Cgroup Driver: cgroupfs
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
【Note】If you restart the kubelet before running kubeadm init, it keeps failing with the error below. This is expected, because /var/lib/kubelet/config.yaml is only generated by kubeadm init:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file
"/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
%%%%% The following is the official documentation, for reference: When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/kubeadm-flags.env file during runtime.
If you are using a different CRI, you have to modify the file /etc/default/kubelet with your cgroup-driver value, like so: KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
This file will be used by kubeadm init and kubeadm join to source extra user defined arguments for the kubelet.
Please mind, that you only have to do that if the cgroup driver of your CRI is not cgroupfs, because that is the default value in the kubelet already.
Restarting the kubelet is required: systemctl daemon-reload
systemctl restart kubelet
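As a concrete illustration of the passage above: if your CRI's cgroup driver were systemd (a hypothetical value; this walkthrough uses Docker with cgroupfs, so this is not needed here), /etc/default/kubelet would contain a line like the following, followed by the restart commands already shown:
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd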
12【Perform this step on every node, including the master】
The steps so far installed the k8s packages; now prepare the images that k8s needs.【Otherwise you may hit errors like the following during kubeadm init】:
Failed to pull image "quay.io/coreos/flannel:v0.11.0-amd64", and so on.
To get ahead of these problems, prepare the images in advance. Use kubeadm config images list to see which base images are required; they are called base images because they are needed at kubeadm init time. Other images will be needed later when installing add-ons; when you hit a problem, check which image is missing and pull it in the same way.
[root@k8s-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
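If your network can reach k8s.gcr.io directly, you can simply pre-pull these images with the built-in command (also mentioned in the kubeadm preflight output further below); otherwise use the mirror script that follows:
kubeadm config images pull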
Based on the image names listed above, I wrote a script to handle this (note: the versions below are for v1.14.1, which is the version actually deployed later in this article; adjust them to whatever kubeadm config images list prints on your machine):
Edit the pull script: vim pull_image.sh
#!/bin/bash
#### base images #####
images=(
kube-apiserver:v1.14.1
kube-controller-manager:v1.14.1
kube-scheduler:v1.14.1
kube-proxy:v1.14.1
pause:3.1
etcd:3.3.10
coredns:1.3.1
kubernetes-dashboard-amd64:v1.10.1
)
for imageName in ${images[@]};do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
### add-on image: flannel network ###
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
### runtime pause image (also covered by the loop above) ###
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
chmod a+x pull_image.sh
./pull_image.sh
docker images
13【Perform this step on the master machine】
~]# kubelet --version
Kubernetes v1.14.1
Initialize the Kubernetes master machine:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1 > kube_init.log
10.244.0.0/16 is the pod network CIDR required when using the flannel network (the official docs say you must pass exactly this value).
【From the official docs】
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.50.10.10 --kubernetes-version=v1.13.4
192.168.0.166 is the master host's IP address.
v1.14.1 is the Kubernetes version found above with kubelet --version.
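Because the init command above redirects its output into kube_init.log, you can recover the join command from that file later, for example:
grep -A1 'kubeadm join' kube_init.log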
Below is my own session log, for reference only:
[root@rancher ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [rancher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.166]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.505486 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node rancher as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node rancher as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: dije5w.ipijm49d8c9isxie
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.166:6443 --token dije5w.ipijm49d8c9isxie \
--discovery-token-ca-cert-hash sha256:c1aaaafc79d85141e60a73c43562e6b06cb8d9cdc24cc0a649c8d6e0f24c5f42
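If you lose this join command or the token expires, you can generate a new one on the master at any time (a standard kubeadm command, not part of the original log):
kubeadm token create --print-join-command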
14【Perform the following operations on the master】
For the complete content of these steps, go to the WeChat official account 木子李的菜田 and enter: k8s
To be able to run kubectl normally on the nodes, save the admin config file from the master onto every node; a rough sketch follows below.
。。。。。
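A minimal sketch of what this typically involves (my own assumption, not the author's elided steps; replace <node-ip> with your node's address):
scp /etc/kubernetes/admin.conf root@<node-ip>:/etc/kubernetes/admin.conf
# then, on each node:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile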
【Perform the following operations on the node】
。。。。。
The next step is an extremely important one!!!!
。。。。。。
【Important】If the first kubeadm init failed and you need to re-run the initialization, you must do the following first (a commonly used cleanup is sketched below):
。。。。。。
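A commonly used cleanup before re-running kubeadm init (a sketch based on standard kubeadm usage, not the author's elided steps):
kubeadm reset
rm -rf $HOME/.kube/config
# then run kubeadm init again with the same flags as before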
Check whether Kubernetes was installed correctly:
[root@master ~]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-srtht 0/1 Pending 0 3h
coredns-86c58d9df4-tl7ww 0/1 Pending 0 3h
etcd-master 1/1 Running 0 179m
kube-apiserver-master 1/1 Running 0 179m
kube-controller-manager-master 1/1 Running 0 3h
kube-proxy-2sdmn 1/1 Running 1 3h
kube-proxy-ln5tk 1/1 Running 1 173m
kube-scheduler-master 1/1 Running 0 3h
Many components are still not in the correct state (coredns is Pending); don't worry, continue with steps 15 and 16.
15【Perform this only on the master】Configure CNI; this step is closely tied to networking.
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following content:
。。。。。。
16 Install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
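Once the flannel DaemonSet pods are running, the nodes should move to the Ready state; a quick check:
kubectl get nodes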
17 Finally, check on both the master and the node:
Only when everything is Running is the deployment truly correct.
[root@master ~]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-srtht 1/1 Running 0 3h16m
coredns-86c58d9df4-tl7ww 1/1 Running 0 3h16m
etcd-master 1/1 Running 1 3h15m
kube-apiserver-master 1/1 Running 1 3h15m
kube-controller-manager-master 1/1 Running 1 3h16m
kube-flannel-ds-amd64-7mntm 1/1 Running 0 12m
kube-flannel-ds-amd64-bxzdn 1/1 Running 0 12m
kube-proxy-2sdmn 1/1 Running 1 3h16m
kube-proxy-ln5tk 1/1 Running 1 3h9m
kube-scheduler-master 1/1 Running 1 3h16m
End
Some of the pitfalls in this process have already been dealt with by the steps above. If you run into other problems, Google them or use kubectl describe to find the cause; pay particular attention to whether the problem occurs on the master or on a node.
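For example, to see why a particular kube-system pod is stuck, substitute the real pod name from kubectl get pods:
kubectl describe pod <pod-name> -n kube-system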