Environment:
master, etcd  172.16.1.5
node1         172.16.1.6
node2         172.16.1.7
Prerequisites:
1. Hostname-based communication between nodes, via /etc/hosts
2. Time synchronization across all nodes
3. firewalld and iptables.service disabled (see the sketch after this list)
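
A minimal prep sketch for all three nodes (assuming chrony for time sync; turning swap off is optional here because --fail-swap-on=false is used later):

systemctl stop firewalld && systemctl disable firewalld   # no firewall between cluster nodes
yum install -y chrony
systemctl start chronyd && systemctl enable chronyd       # keep node clocks in sync
swapoff -a                                                # kubelet refuses to start with swap on by default
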
Installation and configuration steps (manual deployment):
1. etcd cluster, on the master node only
2. flannel, on all cluster nodes
3. k8s master node:
   apiserver, scheduler, controller-manager
4. configure the k8s worker nodes:
   set up docker, kube-proxy, and kubelet first

kubeadm deployment:
1. master and nodes: install kubelet, docker, kubeadm
2. master: kubeadm init to initialize the master node
3. nodes: kubeadm join
Initialization design reference:
https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

[root@node1 ~]#cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.5 master.xiaolizi.com master
172.16.1.6 node1.xiaolizi.com node1
172.16.1.7 node2.xiaolizi.com node2
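
The same hosts file must be present on every node; one quick way to push it out from node1 (a hypothetical loop, assuming root SSH access between the nodes):

for h in master node2; do scp /etc/hosts $h:/etc/hosts; done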

Kubernetes yum repo mirror: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
GPG key: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Docker repo mirror: wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum repolist
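
To see which package versions the mirror currently offers before installing (optional check):

yum list kubeadm kubelet kubectl --showduplicates | tail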

Install docker, kubeadm, kubectl, and kubelet

yum install docker-ce kubeadm kubectl kubelet -y
systemctl enable kubelet
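
To match the exact version used in the rest of this walkthrough (v1.15.1), the packages can instead be pinned to a version (standard yum name-version syntax):

yum install -y docker-ce kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1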

Many of the images k8s needs cannot be downloaded from inside China, so edit the configuration file below to make them reachable. Before starting docker, define Environment variables in the [Service] section of the unit file so docker pulls the k8s images through a proxy. Once the images have been loaded, the variables can be commented out, using only a domestic registry mirror for non-k8s images; re-enable the proxy later when it is needed again.

# Set this proxy address according to your own machine's proxy
vim /usr/lib/systemd/system/docker.service
[Service]
Environment="HTTPS_PROXY=http://192.168.2.208:10080" # the images are pulled from abroad; this address and port belong to the proxy service (some people instead pre-pull the images and push them to a local registry)
Environment="HTTP_PROXY=http://192.168.2.208:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.2.0/25" # after saving and quitting, run:
systemctl daemon-reload
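
To confirm docker actually picked up the proxy variables after the reload:

systemctl show docker --property=Environment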
# Make sure the following two parameters are 1 (the default is 1):
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# If the result is not 1, edit the sysctl config:
vim /usr/lib/sysctl.d/00-system.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl --system
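
Verify the new values took effect (sysctl accepts several keys at once):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables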
# Start docker-ce
systemctl start docker
# Enable it at boot
systemctl enable docker.service
# Before starting kubelet, check which files the package installed
[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests # manifest directory
/etc/sysconfig/kubelet # config file
/usr/bin/kubelet # main binary
/usr/lib/systemd/system/kubelet.service # unit file
# Early versions refuse to start while swap is enabled; to change that, define the parameter in this config file:
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Start kubelet
systemctl start kubelet # kubelet will not stay up at this point; the master node has not been initialized yet
systemctl stop kubelet
systemctl enable kubelet
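
If you want to see why kubelet exits before the master is initialized, its logs show the missing configuration:

journalctl -u kubelet --no-pager | tail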

Initialize the master node with kubeadm init; the command takes many options:
--apiserver-bind-port # port the apiserver listens on, default 6443
--apiserver-advertise-address # address the apiserver listens on, default 0.0.0.0
--cert-dir # directory to load certificates from, default /etc/kubernetes/pki
--config # path to kubeadm's own configuration file
--ignore-preflight-errors # ignore specified errors during preflight checks; you choose what to ignore, e.g. 'IsPrivilegedUser,Swap'
--kubernetes-version # which k8s version to deploy
--pod-network-cidr # the network pods belong to
--service-cidr # the network services belong to, default 10.96.0.0/12

kubeadm init \
--kubernetes-version=v1.15.1 \
--ignore-preflight-errors=Swap \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12

[root@master ~]# docker image ls
REPOSITORY                           TAG       IMAGE ID       CREATED      SIZE
k8s.gcr.io/kube-apiserver            v1.15.1   68c3eb07bfc3   weeks ago    207MB
k8s.gcr.io/kube-proxy                v1.15.1   89a062da739d   weeks ago    82.4MB
k8s.gcr.io/kube-scheduler            v1.15.1   b0b3c4c404da   weeks ago    81.1MB
k8s.gcr.io/kube-controller-manager   v1.15.1   d75082f1d121   weeks ago    159MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   months ago   40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   months ago   258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   months ago   742kB
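
If no proxy is available, a commonly used alternative is to pull the same images from Alibaba Cloud's mirror of google_containers and retag them to the k8s.gcr.io names kubeadm expects (a sketch, assuming the mirror carries these tags):

for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    docker pull registry.aliyuncs.com/google_containers/$img                 # pull from the domestic mirror
    docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img  # retag to the name kubeadm looks for
done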

Output of the master node initialization:

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.. Latest validated version: 18.09
[WARNING Hostname]: hostname "master" could not be reached
[WARNING Hostname]: hostname "master": lookup master on 223.5.5.5:: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.0.0.5 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.5]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.503552 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xfmp2o.rg9vt1jojg8rcb01
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.5:6443 --token xfmp2o.rg9vt1jojg8rcb01 \
    --discovery-token-ca-cert-hash sha256:8ce2a857cb3383cb3bf509335de43c78e8d569e091caadd74865e2179d625bbc
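
The bootstrap token above is only valid for 24 hours; if a node joins later than that, generate a fresh token plus the full join command on the master:

kubeadm token create --print-join-command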

Run on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get --help # show help for kubectl get
kubectl get cs # component status (componentstatus)
kubectl get nodes # node information

Run on each node:

kubeadm join 10.0.0.5:6443 --token xfmp2o.rg9vt1jojg8rcb01 \
--discovery-token-ca-cert-hash sha256:8ce2a857cb3383cb3bf509335de43c78e8d569e091caadd74865e2179d625bbc \
--ignore-preflight-errors=Swap

[root@node1 ~]# docker image ls # once these images appear, the join is done
REPOSITORY               TAG             IMAGE ID       CREATED      SIZE
k8s.gcr.io/kube-proxy    v1.15.1         89a062da739d   weeks ago    82.4MB
quay.io/coreos/flannel   v0.11.0-amd64   ff281650a721   months ago   52.6MB
k8s.gcr.io/pause         3.1             da86e6ba6ca1   months ago   742kB
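
Back on the master, the freshly joined node should show up (it stays NotReady until the flannel pod on it is running):

kubectl get nodes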

Install the flannel network add-on
Project page:
https://github.com/coreos/flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
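
The flannel manifest creates a DaemonSet in kube-system; its pods can be watched until they are all Running (assuming the manifest's app=flannel label):

kubectl get pods -n kube-system -l app=flannel -w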

[root@master ~]# docker image ls
# once the image below has been pulled, the download is complete
quay.io/coreos/flannel   v0.11.0-amd64   ff281650a721   months ago   52.6MB

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   66m   v1.15.1

[root@master ~]# kubectl get pods -n kube-system # pods in the kube-system namespace
NAME                             READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-cg2rw         1/1     Running   0          66m
coredns-5c98db65d4-qqd2v         1/1     Running   0          66m
etcd-master                      1/1     Running   0          65m
kube-apiserver-master            1/1     Running   0          65m
kube-controller-manager-master   1/1     Running   0          66m
kube-flannel-ds-amd64-wszr5      1/1     Running   0          2m37s
kube-proxy-xw9gm                 1/1     Running   0          66m
kube-scheduler-master            1/1     Running   0          65m

[root@master ~]# kubectl get ns # list namespaces
NAME              STATUS   AGE
default           Active   72m
kube-node-lease   Active   73m
kube-public       Active   73m
kube-system       Active   73m
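
As a final smoke test, schedule something and confirm it lands on a worker node (a hypothetical nginx deployment, not part of the original setup):

kubectl create deployment nginx --image=nginx   # create a test deployment
kubectl get pods -o wide                        # the pod should be Running on node1 or node2
kubectl get nodes                               # all nodes should now be Ready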
