1. Test environment

Host     OS
master   CentOS 7.3
node     CentOS 7.3

2. Disable SELinux (run on all nodes)

[root@matser ~]# getenforce
Disabled
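
If SELinux is still enforcing, one common way to turn it off on CentOS 7 is shown below (a minimal sketch; the config change only fully applies after a reboot):

setenforce 0                                                            # permissive for the current boot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config    # disabled after reboot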

3. Turn off the swap partition (run on all nodes)

[root@matser ~]# swapoff -a
[root@matser ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.8G        502M        117M        2.0M        1.2G        1.1G
Swap:            0B          0B          0B
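
Note that swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entry in /etc/fstab can be commented out as well (a minimal sketch, inspect the file before editing):

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out any swap entries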

4. Configure sshd keepalive (run on all nodes)

echo "ClientAliveInterval 10" >> /etc/ssh/sshd_config
echo "TCPKeepAlive yes" >> /etc/ssh/sshd_config
systemctl restart sshd.service
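
To confirm that sshd picked up the new values, the effective configuration can be dumped (verification only; run as root):

sshd -T | grep -iE 'clientaliveinterval|tcpkeepalive'

Keep in mind that sshd uses the first occurrence of a directive, so the appended lines have no effect if those keys were already set earlier in the file.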

5. Install Docker (run on all nodes)

 yum install -y docker
systemctl enable docker && systemctl start docker
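
kubeadm expects Docker's cgroup driver to match the kubelet's, so it is worth noting which driver the distro package configured (verification only):

docker info 2>/dev/null | grep -i 'cgroup driver'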

6. Configure the bridge netfilter sysctls (run on all nodes)

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
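
On a stock CentOS 7 kernel these two keys only exist once the br_netfilter module is loaded, so if sysctl --system complains that the files are missing, loading the module and re-running it usually resolves it (a minimal sketch):

modprobe br_netfilter
sysctl --system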

7. Install kubeadm, kubelet and kubectl (run on all nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
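
The repo above always carries the latest packages, so to reproduce the v1.9.0 init below it may help to pin the versions explicitly (the -0 release suffix is an assumption about how the packages are named in this repo):

yum install -y kubelet-1.9.0-0 kubeadm-1.9.0-0 kubectl-1.9.0-0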

8. Initialize the cluster, specifying the Kubernetes version and pod-network-cidr (on the master)

 kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16

[root@matser ~]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "matser" could not be reached
[WARNING Hostname]: hostname "matser" lookup matser on 100.100.2.136:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [matser kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.1.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 50.001758 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node matser as master by adding a label and a taint
[markmaster] Master matser tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: e03777.05d943f3d7c05ff1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy


Your Kubernetes master has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/


You can now join any number of machines by running the following on each node
as root:


kubeadm join --token e03777.05d943f3d7c05ff1 172.31.1.1:6443 --discovery-token-ca-cert-hash sha256:40abf04eaea9097377b2b3def894a4a2540a353ac76bc918ca6c18549193f45c

 

9. Set the KUBECONFIG variable (on the master)

 export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

10. Deploy the Flannel pod network (on the master)

 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
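
This manifest defaults to the 10.244.0.0/16 pod network, which is why --pod-network-cidr=10.244.0.0/16 was passed to kubeadm init. If the master cannot reach raw.githubusercontent.com directly, the file can be downloaded elsewhere and applied locally (the local filename is an assumption):

wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml -O kube-flannel.yml
kubectl apply -f kube-flannel.yml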

## Check that all pods come up
[root@matser ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
default       nginx                            /       Running              15s
kube-system   etcd-matser                      /       Running              4m
kube-system   kube-apiserver-matser            /       Running              3m
kube-system   kube-controller-manager-matser   /       Running              4m
kube-system   kube-dns-6f4fd4bdf-lrt5x         /       Running              4m
kube-system   kube-flannel-ds-9rfcs            /       Running              3m
kube-system   kube-flannel-ds-fh2pw            /       Running              1m
kube-system   kube-proxy-czxhz                 /       Running              1m
kube-system   kube-proxy-t9php                 /       Running              4m
kube-system   kube-scheduler-matser            /       Running              3m

11. Join the node to the cluster (on the node)

Run the following on the node:

[root@node ~]# kubeadm join --token e03777.05d943f3d7c05ff1 172.31.1.1:6443 --discovery-token-ca-cert-hash sha256:40abf04eaea9097377b2b3def894a4a2540a353ac76bc918ca6c18549193f45c
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "172.31.1.1:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.1.1:6443"
[discovery] Requesting info from "https://172.31.1.1:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.1.1:6443"
[discovery] Successfully established connection with API Server "172.31.1.1:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
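
Bootstrap tokens expire (24 hours by default), so if the join command above no longer works, a new token can be created on the master and the CA certificate hash recomputed (a sketch using a commonly cited openssl recipe):

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'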

12. Verify on the master

[root@matser ~]# kubectl  get node
NAME     STATUS   ROLES    AGE   VERSION
matser   Ready    master   23m   v1.9.4
node     Ready    <none>   19m   v1.9.4
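
The earlier pod listing already shows an nginx pod in the default namespace; a minimal smoke test along those lines (the name and image are assumptions) could be:

kubectl run nginx --image=nginx --port=80
kubectl get pods -o wide    # the pod should be scheduled onto the node and reach Running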
