I recently heard from a friend that his company is moving to the cloud and migrating all of their services onto k8s. That made our setup feel a bit dated: our servers have always just run plain Docker. Moving to k8s didn't seem like it should be too hard, so we started our own migration.

I worked from quite a few documents along the way; if you are interested, the originals are worth reading: https://kubernetes.io/docs/setup/independent/install-kubeadm/

https://blog.csdn.net/networken/article/details/84991940

https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-with-kubeadm.html

k8s cluster plan

Hostname   IP             Role
kmaster    192.168.9.88   master
knode1     192.168.9.81   node
knode2     192.168.9.82   node

Run the following on all three machines:

cat >> /etc/hosts <<EOF
192.168.9.88 kmaster
192.168.9.81 knode1
192.168.9.82 knode2
EOF

# Put SELinux into permissive mode, both immediately and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap (kubelet refuses to run with swap on), keeping a backup of /etc/fstab
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
grep -v swap /etc/fstab_bak > /etc/fstab
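A quick sanity check on each node before moving on; the expected outputs are what I saw on my machines, offered as a hedged guide rather than a guarantee:

getenforce               # should print Permissive
free -m | grep -i swap   # the Swap line should be all zeros
grep swap /etc/fstab     # should print nothing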

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Write a script that loads the kernel modules IPVS needs
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make the script executable, run it, and confirm the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Userland tools needed for kube-proxy's IPVS mode
yum install ipset ipvsadm -y
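To double-check that the bridge sysctls actually took effect, here is a hedged check; on minimal installs the br_netfilter module may not be loaded yet, so load it first:

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both lines should end in "= 1"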

Set up the Docker yum repository

# Configure the Docker yum repo (yum-config-manager comes from yum-utils)
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install a pinned version, 18.06 here; list the candidates first
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker && systemctl enable docker
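Optionally, you can point Docker at a registry mirror before pulling anything large. This is only a sketch: the mirror URL below is an example I did not benchmark, so substitute whichever mirror you trust:

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker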

Install kubeadm, kubelet and kubectl

# Configure the kubernetes.repo source; the official repo is not reachable from
# mainland China, so use the Aliyun yum mirror instead
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install pinned versions of kubelet, kubeadm and kubectl on all nodes
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1

# Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet
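Don't be alarmed if kubelet isn't running cleanly yet: until kubeadm init (or kubeadm join) writes its configuration, kubelet restarts in a loop every few seconds, and that is expected. A couple of hedged checks:

systemctl status kubelet   # "activating (auto-restart)" is normal at this stage
kubeadm version -o short   # should print v1.13.1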

Deploy the master

kubeadm init \
--apiserver-advertise-address=192.168.9.88 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.13.1 \
--pod-network-cidr=10.244.0.0/16

Note the --image-repository flag used for the init: it tells kubeadm to pull the images it needs from the Aliyun mirror instead of the default (unreachable) registry.

This step is a little slow. If the output ends like the following, the init succeeded:

[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.9.88]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.9.88 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.9.88 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.007105 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kmaster" as an annotation
[mark-control-plane] Marking the node kmaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: hpvjuo.divmu5zdcqb7oysy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.9.88:6443 --token hpvjuo.divmu5zdcqb7oysy --discovery-token-ca-cert-hash sha256:a5e36c51c68ad1f1e07286c8c9c58bf5b8794c25182b18b15c1dcb6e99462eb2

  

# Create a regular user with password 123456
# (chpasswd reads "user:password" pairs from stdin and takes no username argument)
useradd k8s && echo "k8s:123456" | chpasswd

# Grant passwordless sudo
sed -i '/^root/a\k8s ALL=(ALL) NOPASSWD:ALL' /etc/sudoers
[root@kmaster ~]# su - k8s
[k8s@kmaster ~]$ mkdir -p $HOME/.kube
[k8s@kmaster ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@kmaster ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

[k8s@kmaster ~]$ # Enable kubectl command completion (takes effect after re-login)
[k8s@kmaster ~]$ echo "source <(kubectl completion bash)" >> ~/.bashrc


[k8s@kmaster ~]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
kmaster   NotReady   master   6m51s   v1.13.1
[k8s@kmaster ~]$ kubectl describe node kmaster
Name:               kmaster
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=kmaster
                    node-role.kubernetes.io/master=

Check the pods. The two coredns pods are Pending, and they will stay that way until a pod network add-on is deployed:

[k8s@kmaster ~]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-78d4cf999f-l9f7v          0/1     Pending   0          2m32s   <none>         <none>    <none>           <none>
coredns-78d4cf999f-n8g4g          0/1     Pending   0          2m32s   <none>         <none>    <none>           <none>
etcd-kmaster                      1/1     Running   0          6m51s   192.168.9.88   kmaster   <none>           <none>
kube-apiserver-kmaster            1/1     Running   0          6m48s   192.168.9.88   kmaster   <none>           <none>
kube-controller-manager-kmaster   1/1     Running   0          6m54s   192.168.9.88   kmaster   <none>           <none>
kube-proxy-57lvg                  1/1     Running   0          7m32s   192.168.9.88   kmaster   <none>           <none>
kube-scheduler-kmaster            1/1     Running   0          6m45s   192.168.9.88   kmaster   <none>           <none>

Deploy the network add-on (flannel here)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
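One caveat I ran into while reading around: on machines with more than one network interface, flannel can bind to the wrong one. A hedged workaround, assuming your cluster traffic runs over an interface named eth0 (the interface name is an assumption; check yours with ip addr):

curl -sO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# In the kube-flannel container's args, add a line
#   - --iface=eth0
# right after "- --kube-subnet-mgr", then apply the edited manifest:
kubectl apply -f kube-flannel.yml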

  

[k8s@kmaster ~]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-78d4cf999f-l9f7v          1/1     Running   0          9m18s   10.244.0.3     kmaster   <none>           <none>
coredns-78d4cf999f-n8g4g          1/1     Running   0          9m18s   10.244.0.2     kmaster   <none>           <none>
etcd-kmaster                      1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-apiserver-kmaster            1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-controller-manager-kmaster   1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-flannel-ds-amd64-dkb2t       1/1     Running   0          2m44s   192.168.9.88   kmaster   <none>           <none>
kube-proxy-57lvg                  1/1     Running   0          14m     192.168.9.88   kmaster   <none>           <none>
kube-scheduler-kmaster            1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>

At this point the Kubernetes master node is fully deployed. If a single-node Kubernetes is all you need, you can use it right away; just remove the master's NoSchedule taint first, as shown further down.

 

Deploy the worker nodes

[root@knode1 ~]# kubeadm join 192.168.9.88:6443 --token hpvjuo.divmu5zdcqb7oysy --discovery-token-ca-cert-hash sha256:a5e36c51c68ad1f1e07286c8c9c58bf5b8794c25182b18b15c1dcb6e99462eb2
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.9.88:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.9.88:6443"
[discovery] Requesting info from "https://192.168.9.88:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.9.88:6443"
[discovery] Successfully established connection with API Server "192.168.9.88:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "knode1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

  

# Each node joins the cluster with the kubeadm join command that kubeadm init
# printed; make sure you use the token and IP of YOUR cluster, not a set copied
# from some guide.

# If you didn't record the join command when kubeadm init finished, you can
# regenerate it on the master at any time:
kubeadm token create --print-join-command
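That command prints a ready-to-run join command of this shape (the token and hash below are placeholders, not real values):

kubeadm join 192.168.9.88:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>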

Check the node status:

[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
kmaster   Ready      master   19m     v1.13.1
knode1    NotReady   <none>   2m5s    v1.13.1
knode2    NotReady   <none>   2m9s    v1.13.1

Wait a moment:

[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   24m     v1.13.1
knode1    Ready    <none>   7m46s   v1.13.1
knode2    Ready    <none>   7m50s   v1.13.1
[k8s@kmaster ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-78d4cf999f-l9f7v          1/1     Running   0          20m     10.244.0.3     kmaster   <none>           <none>
kube-system   coredns-78d4cf999f-n8g4g          1/1     Running   0          20m     10.244.0.2     kmaster   <none>           <none>
kube-system   etcd-kmaster                      1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster            1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster   1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-flannel-ds-amd64-44x4d       1/1     Running   0          8m48s   192.168.9.81   knode1    <none>           <none>
kube-system   kube-flannel-ds-amd64-465pk       1/1     Running   0          8m51s   192.168.9.82   knode2    <none>           <none>
kube-system   kube-flannel-ds-amd64-dkb2t       1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-proxy-4rgz9                  1/1     Running   0          8m48s   192.168.9.81   knode1    <none>           <none>
kube-system   kube-proxy-57lvg                  1/1     Running   0          25m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-proxy-hbbqj                  1/1     Running   0          8m51s   192.168.9.82   knode2    <none>           <none>
kube-system   kube-scheduler-kmaster            1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
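With every node Ready, a quick smoke test is worthwhile. A hedged sketch; the nginx deployment name and image are just for illustration, and you should clean up afterwards:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc -o wide
# when done:
kubectl delete svc,deployment nginx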

Allowing pods to be scheduled on the master node

[k8s@kmaster ~]$ kubectl taint node kmaster node-role.kubernetes.io/master-
node/kmaster untainted

To restore the master-only behaviour, re-apply the taint; note that the node is named kmaster here, and an effect (NoSchedule) must be specified when adding a taint:

kubectl taint node kmaster node-role.kubernetes.io/master=:NoSchedule
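Either way, you can confirm the current taint state with:

kubectl describe node kmaster | grep Taints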

Switching kube-proxy to IPVS mode

Edit the kube-proxy ConfigMap in kube-system and set mode: "ipvs" in its config.conf:

[k8s@kmaster ~]$ kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
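Inside the editor, the relevant fragment of config.conf looks roughly like this; only the mode line changes (this is an excerpt, not the whole file):

    kind: KubeProxyConfiguration
    ...
    mode: "ipvs"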

Then restart the kube-proxy pods on each node; the DaemonSet recreates them with the new configuration:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

[k8s@kmaster ~]$ kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-4rgz9" deleted
pod "kube-proxy-57lvg" deleted
pod "kube-proxy-hbbqj" deleted

 
[k8s@kmaster ~]$ kubectl logs kube-proxy-6btv9 -n kube-system
I0125 server_others.go: Using ipvs Proxier.
W0125 proxier.go: IPVS scheduler not specified, use rr by default
I0125 server_others.go: Tearing down inactive rules.
I0125 server.go: Version: v1.13.1
I0125 conntrack.go: Setting nf_conntrack_max to ...
I0125 config.go: Starting endpoints config controller
I0125 controller_utils.go: Waiting for caches to sync for endpoints config controller
I0125 config.go: Starting service config controller
I0125 controller_utils.go: Waiting for caches to sync for service config controller
I0125 controller_utils.go: Caches are synced for endpoints config controller
I0125 controller_utils.go: Caches are synced for service config controller

The "Using ipvs Proxier." line is what confirms kube-proxy is now running in IPVS mode.
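As a second hedged check, the ipvsadm tool installed earlier can dump the virtual server table; the kubernetes service VIPs should appear with the rr scheduler:

sudo ipvsadm -Ln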
