Using --image-repository to get around the Google image wall when installing a k8s cluster with kubeadm
Many of the kubeadm installation guides you find online say you need to pull the images in advance and then re-tag them by hand. In fact, we can use a single configuration flag to get past the wall quickly.
Note:
We still have a pull problem for the base images themselves, but Docker Hub already hosts mirrored copies of most of them, so all we need to do is pull them and tag them under one consistent name. For a quick test I used a mirror someone else had already set up: index.docker.io/mirrorgooglecontainers
A simple demo
- Prepare the coredns image
The repository above does not mirror the coredns image by default, so I pulled it and re-tagged it locally:
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6
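Before running init you can also ask kubeadm exactly which images it needs, and pre-pull anything the mirror is missing the same way we handled coredns above. A small sketch of a pre-pull loop; it assumes the mirror publishes the same repository names and tags as k8s.gcr.io, which is not guaranteed (the coredns case is exactly such a gap):

# Print every image this kubeadm version will try to pull
# (pass --kubernetes-version to skip the internet version lookup)
kubeadm config images list

# Pre-pull each required image from the Docker Hub mirror; the shell
# expansion ${img#k8s.gcr.io/} strips the k8s.gcr.io/ prefix so only
# the repo:tag part is reused under the mirror namespace.
for img in $(kubeadm config images list); do
    docker pull index.docker.io/mirrorgooglecontainers/${img#k8s.gcr.io/}
done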
- init
kubeadm init --image-repository index.docker.io/mirrorgooglecontainers
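Note in the output below that kubeadm first times out trying to fetch the latest stable version from the internet (another blocked endpoint) and only then falls back to the local client version. If you want to skip that wait, you can pin the version explicitly; a hedged variant, where v1.13.1 simply matches the fallback version in the log:

kubeadm init --image-repository index.docker.io/mirrorgooglecontainers --kubernetes-version v1.13.1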
- Result
kubeadm init --image-repository index.docker.io/mirrorgooglecontainers
I1227 14:33:45.044189 5340 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I1227 14:33:45.044262 5340 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [iz2zeg7uro1snhd9wqmp2oz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 47.93.161.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [iz2zeg7uro1snhd9wqmp2oz localhost] and IPs [47.93.161.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [iz2zeg7uro1snhd9wqmp2oz localhost] and IPs [47.93.161.2 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.001573 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "iz2zeg7uro1snhd9wqmp2oz" as an annotation
[mark-control-plane] Marking the node iz2zeg7uro1snhd9wqmp2oz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node iz2zeg7uro1snhd9wqmp2oz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eopu1d.aygr2do3dfz0zndh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <your-server-ip>:6443 --token eopu1d.aygr2do3dfz0zndh --discovery-token-ca-cert-hash sha256:c74094ffde73bf834a13a994f6715d2a6fcc165913a54812255d62c90460153b
- Cluster component status
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
- Deployed pod components
Because I have not deployed a network add-on yet, a few of them are not ready:
kubectl get all --all-namespaces
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-754c658c4f-kr29k                          0/1     Pending   0          6m32s
kube-system   pod/coredns-754c658c4f-lthgk                          0/1     Pending   0          6m32s
kube-system   pod/etcd-iz2zeg7uro1snhd9wqmp2oz                      1/1     Running   0          5m40s
kube-system   pod/kube-apiserver-iz2zeg7uro1snhd9wqmp2oz            1/1     Running   0          5m27s
kube-system   pod/kube-controller-manager-iz2zeg7uro1snhd9wqmp2oz   1/1     Running   0          5m35s
kube-system   pod/kube-proxy-kkfrl                                  1/1     Running   0          6m32s
kube-system   pod/kube-scheduler-iz2zeg7uro1snhd9wqmp2oz            1/1     Running   0          5m46s
NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         6m41s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   6m37s
NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           <none>          6m37s
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           6m37s
NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-754c658c4f   2         2         0       6m32s
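To bring the coredns pods out of Pending, deploy a pod network add-on as the init output suggests. For example flannel; a sketch only, since the manifest URL below is the historical coreos/flannel location and may have moved, so check the flannel project for the current one:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml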
Notes
For the remaining concrete steps, refer to the official documentation; my demo only covers part of the process. The main point is that a configuration flag can solve most of the image-wall problems. kubeadm supports other convenient flags as well; run kubeadm --help or read the official documentation to learn more.
You can see the effect of the configured parameters in the running processes (note the --pod-infra-container-image flag on the kubelet below):
root 5471 1 1 14:33 ? 00:00:07 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=index.docker.io/mirrorgooglecontainers/pause:3.1
root 6016 5982 3 14:33 ? 00:00:17 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=<ip> --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 6128 5996 1 14:33 ? 00:00:05 etcd --advertise-client-urls=https://<ip>:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://<ip>:2380 --initial-cluster=iz2zeg7uro1snhd9wqmp2oz=https://<ip>:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://<ip>:2379 --listen-peer-urls=https://<ip>:2380 --name=iz2zeg7uro1snhd9wqmp2oz --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root 6216 6000 0 14:33 ? 00:00:03 kube-scheduler --address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root 6226 6011 1 14:33 ? 00:00:08 kube-controller-manager --address=127.0.0.1 --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
root 6727 6707 0 14:34 ? 00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=iz2zeg7uro1snhd9wqmp2oz
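The same image-repository setting can also be written into a kubeadm configuration file and passed to init with --config, instead of on the command line. A minimal sketch against the v1beta1 ClusterConfiguration API used by kubeadm v1.13, where imageRepository is a documented field; adjust kubernetesVersion to your cluster:

# kubeadm-config.yaml -- minimal sketch; only the fields relevant here
# apiVersion: kubeadm.k8s.io/v1beta1
# kind: ClusterConfiguration
# kubernetesVersion: v1.13.1
# imageRepository: index.docker.io/mirrorgooglecontainers

# Then initialize the cluster from the file:
kubeadm init --config kubeadm-config.yaml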
References
https://kubernetes.io/docs/setup/independent/install-kubeadm/