1. Test environment

Host      OS
master    CentOS 7.3
node      CentOS 7.3
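
The kubeadm init output later in this post warns that the hostname "matser" cannot be resolved ("lookup matser ... no such host"). If DNS does not resolve the node names, adding them to /etc/hosts on every node avoids that warning. A minimal sketch, using the master IP 172.31.1.1 that appears in the init output and a placeholder address for the node:

    # 172.31.1.2 is a placeholder; replace it with the node's real IP
    cat >> /etc/hosts <<EOF
    172.31.1.1  matser
    172.31.1.2  node
    EOF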

2. Disable SELinux (run on all nodes)

    [root@matser ~]# getenforce
    Disabled
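
On this host SELinux is already Disabled. If getenforce reports Enforcing instead, a common way to turn it off (a sketch, not part of the original steps) is:

    setenforce 0                                                          # permissive immediately
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # disabled after the next reboot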

3. Disable the swap partition (run on all nodes)

    [root@matser ~]# swapoff -a
    [root@matser ~]# free -h
                  total        used        free      shared  buff/cache   available
    Mem:           1.8G        502M        117M        2.0M        1.2G        1.1G
    Swap:            0B          0B          0B
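
Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab should also be commented out; a sketch:

    swapoff -a
    sed -i '/\sswap\s/s/^/#/' /etc/fstab    # comment out the swap line so it stays off after reboot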

4. Configure sshd keepalive (run on all nodes)

    echo "ClientAliveInterval 10" >> /etc/ssh/sshd_config
    echo "TCPKeepAlive yes" >> /etc/ssh/sshd_config
    systemctl restart sshd.service
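
To confirm that sshd picked up the new settings after the restart, the effective configuration can be dumped (standard OpenSSH, shown here as an extra check):

    sshd -T | grep -iE 'clientaliveinterval|tcpkeepalive'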

5. Install Docker (run on all nodes)

    yum install -y docker
    systemctl enable docker && systemctl start docker
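
It can be worth confirming that Docker is up and noting which cgroup driver it uses, since the kubelet must be configured with the same driver; a quick check (an extra step, not in the original post):

    systemctl status docker --no-pager
    docker info 2>/dev/null | grep -i 'cgroup driver'    # the kubelet's --cgroup-driver should match this value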

6. Configure kernel bridge/netfilter parameters (run on all nodes)

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
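
The two bridge sysctls above only exist once the br_netfilter kernel module is loaded, so loading it explicitly and verifying the values is a reasonable extra step; a sketch:

    modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables    # both should print 1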

7. Install kubeadm, kubelet, and kubectl (run on all nodes)

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF

    yum install -y kubelet kubeadm kubectl
    systemctl enable kubelet && systemctl start kubelet
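
Because step 8 initializes the cluster with --kubernetes-version=v1.9.0, pinning the package versions keeps kubelet/kubeadm/kubectl in step with the control plane. A hedged variant of the install command (the exact version strings available in the repo may differ):

    yum install -y kubelet-1.9.0 kubeadm-1.9.0 kubectl-1.9.0
    systemctl enable kubelet && systemctl start kubelet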

8. Initialize the cluster, specifying the Kubernetes version and the pod-network-cidr (master)

    kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16

[root@matser ~]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "matser" could not be reached
[WARNING Hostname]: hostname "matser" lookup matser on 100.100.2.136:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [matser kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.1.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 50.001758 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node matser as master by adding a label and a taint
[markmaster] Master matser tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: e03777.05d943f3d7c05ff1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token e03777.05d943f3d7c05ff1 172.31.1.1:6443 --discovery-token-ca-cert-hash sha256:40abf04eaea9097377b2b3def894a4a2540a353ac76bc918ca6c18549193f45c

9. Set the KUBECONFIG variable (master)

    export KUBECONFIG=/etc/kubernetes/admin.conf
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
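
With KUBECONFIG pointing at admin.conf, a quick sanity check that kubectl can reach the new API server (standard kubectl commands):

    kubectl cluster-info
    kubectl get componentstatuses    # controller-manager, scheduler and etcd should all report Healthy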

10. Deploy the Flannel pod network (master)

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

    ## Check the pods
    [root@matser ~]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
    default       nginx                            1/1       Running   0          15s
    kube-system   etcd-matser                      1/1       Running   0          4m
    kube-system   kube-apiserver-matser            1/1       Running   0          3m
    kube-system   kube-controller-manager-matser   1/1       Running   0          4m
    kube-system   kube-dns-6f4fd4bdf-lrt5x         3/3       Running   0          4m
    kube-system   kube-flannel-ds-9rfcs            1/1       Running   0          3m
    kube-system   kube-flannel-ds-fh2pw            1/1       Running   0          1m
    kube-system   kube-proxy-czxhz                 1/1       Running   0          1m
    kube-system   kube-proxy-t9php                 1/1       Running   0          4m
    kube-system   kube-scheduler-matser            1/1       Running   0          3m
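
Once the Flannel DaemonSet pods are Running, the nodes should move from NotReady to Ready; this can be watched with standard commands (assuming the app=flannel label used by the upstream kube-flannel.yml):

    kubectl get pods -n kube-system -l app=flannel    # one flannel pod per node
    kubectl get nodes                                 # nodes turn Ready once the pod network is up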

11. Join the nodes (node side)

Run the following on each node:

    [root@node ~]# kubeadm join --token e03777.05d943f3d7c05ff1 172.31.1.1:6443 --discovery-token-ca-cert-hash sha256:40abf04eaea9097377b2b3def894a4a2540a353ac76bc918ca6c18549193f45c
    [preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
    [discovery] Trying to connect to API Server "172.31.1.1:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://172.31.1.1:6443"
    [discovery] Requesting info from "https://172.31.1.1:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.1.1:6443"
    [discovery] Successfully established connection with API Server "172.31.1.1:6443"

    This node has joined the cluster:
    * Certificate signing request was sent to master and a response
      was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the master to see this node join the cluster.
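
The join command embeds a bootstrap token that expires after 24 hours by default. If a node is added later, a fresh token and the CA certificate hash can be generated again on the master; a sketch using standard kubeadm and openssl commands:

    kubeadm token create    # prints a new value for --token
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'    # value for --discovery-token-ca-cert-hash sha256:<hash>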

12. Verify on the master

    [root@matser ~]# kubectl get node
    NAME      STATUS    ROLES     AGE       VERSION
    matser    Ready     master    23m       v1.9.4
    node      Ready     <none>    19m       v1.9.4
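
As a final smoke test (the nginx pod in the earlier pod listing suggests something similar was run here), a throwaway deployment can be scheduled and then removed; with kubectl 1.9, kubectl run creates a Deployment:

    kubectl run nginx --image=nginx --port=80
    kubectl get pods -o wide           # the pod should get a 10.244.x.x address from the Flannel range
    kubectl delete deployment nginx    # clean up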
