Install Kubernetes with kubeadm, using kube-router in place of kube-proxy + Calico for networking.

That is: kube-router providing service proxy, firewall and pod networking.

Version: Kubernetes v1.20.0

Installing the Kubernetes cluster

Upgrading the Linux kernel, disabling swap, disabling the firewall, tuning kernel parameters, and other host preparation are left to you; a sketch follows below.
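For reference, a minimal host-preparation sketch, assuming CentOS 7 (adapt to your distribution):

swapoff -a                                  # disable swap immediately
sed -ri 's/.*swap.*/#&/' /etc/fstab         # keep swap off across reboots
systemctl disable --now firewalld           # stop and disable the firewall
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter                       # needed for the bridge sysctls
sysctl --system                             # apply the settings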

Main installation commands

yum install  kubectl-1.20.0-0.x86_64 kubeadm-1.20.0-0.x86_64 kubelet-1.20.0-0.x86_64
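If yum cannot find these packages, a Kubernetes repository must be configured first; the Aliyun mirror below is a common choice and an assumption here, not taken from the original setup:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
systemctl enable --now kubelet              # kubeadm expects kubelet to be enabled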

kubeadm init --kubernetes-version=1.20.0 --apiserver-advertise-address=192.168.100.80 --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16

[root@k8s-master ~]# kubeadm init --kubernetes-version=1.20.0 --apiserver-advertise-address=192.168.100.80 --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 192.168.100.96:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.80]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.100.80 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.100.80 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.002683 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 68mv1r.5ljn91n2yms71dth
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.80:6443 --token 68mv1r.5ljn91n2yms71dth \
    --discovery-token-ca-cert-hash sha256:2a29deaa942a7eacb055f608caa686d9c59cb34abb0365b32c22d959b2327dc8
[root@k8s-master ~]# rm -rf .kube/
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get node
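Illustrative output (hypothetical timing; the node stays NotReady until a pod network add-on is installed):

NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   50s   v1.20.0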


The base installation is now complete.

Installing the network

Manifest source: https://github.com/cloudnativelabs/kube-router/blob/master/daemonset/kubeadm-kuberouter.yaml
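To fetch the manifest locally first (raw URL inferred from the repository path above):

curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml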

kubectl apply -f kubeadm-kuberouter.yaml

Deploying a test service

kubectl run nginx --image=nginx --expose --port=80 --image-pull-policy='IfNotPresent'
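A quick way to verify that the pod gets an address and the service answers (a sketch; run from a cluster node once the pod is Running):

kubectl get pods,svc -o wide
curl -s http://$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}') | head -n 4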

At this point kube-router provides only pod networking; services and the firewall policy are still handled by kube-proxy.

Switching to kube-router providing service proxy, firewall and pod networking

Reference: https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md

kubectl delete -f kubeadm-kuberouter.yaml
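If the all-features manifest is not already on disk, it can be fetched the same way (raw URL assumed from the same repository layout):

curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml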

kubectl apply -f kubeadm-kuberouter-all-features.yaml

kubectl -n kube-system delete ds kube-proxy

docker run --privileged  -v /lib/modules:/lib/modules --net=host registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0 kube-proxy --cleanup
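Per the kube-router kubeadm guide, the kube-proxy cleanup container should be run on every node that previously ran kube-proxy. Afterwards, a quick check that the handover happened (the k8s-app=kube-router label is assumed from the manifest):

kubectl -n kube-system get ds kube-proxy   # should now return NotFound
kubectl -n kube-system get pods -l k8s-app=kube-router -o wide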

kube-router's service proxy uses IPVS with the round-robin (rr) scheduler.
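On any node you can inspect the IPVS tables that kube-router programs; each service should list the rr scheduler (the output below is illustrative, not captured from this cluster):

ipvsadm -Ln
# IP Virtual Server version 1.2.1 (size=4096)
# TCP  10.96.0.1:443 rr
#   -> 192.168.100.80:6443          Masq    1      0          0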

Finally, reboot the servers.

Testing the service
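After the reboot, re-check the nginx test service created earlier (same hedged sketch as above):

kubectl get svc nginx
curl -s http://$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}') | head -n 4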

Tearing down the cluster (run the node-local cleanup steps on every node)

kubectl drain k8s-master --delete-local-data --force --ignore-daemonsets

kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets

kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets

kubectl delete node k8s-master

rm -rf /etc/cni/net.d/10-kuberouter.conflist

kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

ipvsadm -C
reboot
