Many of the kubeadm guides for installing k8s that you see online say you have to pull the images down first and then retag them by hand. In fact, we can use a configuration parameter to get past the wall problem quickly.
Note:
Pulling the base images is still a problem, but Docker Hub already hosts pre-pulled copies of most of them; all we need to do is pull them under a unified naming scheme. For a quick test I used a mirror that someone else had already set up: index.docker.io/mirrorgooglecontainers
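
If you want to see up front which images a given kubeadm release expects (and therefore what the mirror has to provide), the images subcommand will print them. A minimal sketch; the entries shown as comments are what a v1.13.1 client should list (versions inferred from the init log and process list below):

kubeadm config images list
# k8s.gcr.io/kube-apiserver:v1.13.1
# k8s.gcr.io/kube-controller-manager:v1.13.1
# k8s.gcr.io/kube-scheduler:v1.13.1
# k8s.gcr.io/kube-proxy:v1.13.1
# k8s.gcr.io/pause:3.1
# k8s.gcr.io/etcd:3.2.24
# k8s.gcr.io/coredns:1.2.6

Everything in that list except coredns is already mirrored under mirrorgooglecontainers, which is why coredns gets special handling in the demo below.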

Quick demo

  • Prepare the coredns image
 
By default the repository above does not provide the coredns image, so I pulled it and retagged it locally:
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6
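A quick sanity check (output abbreviated): both names should now point at the same local image ID, and kubeadm's preflight should skip pulling an image that is already present locally, so the local tag is enough for this single-node test.

docker images | grep coredns
# coredns/coredns                  1.2.6   <image-id>   ...
# mirrorgooglecontainers/coredns   1.2.6   <image-id>   ...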
 
  • init
kubeadm init --image-repository index.docker.io/mirrorgooglecontainers
  • Output
kubeadm init --image-repository index.docker.io/mirrorgooglecontainers
I1227 14:33:45.044189 5340 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I1227 14:33:45.044262 5340 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [iz2zeg7uro1snhd9wqmp2oz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 47.93.161.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [iz2zeg7uro1snhd9wqmp2oz localhost] and IPs [47.93.161.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [iz2zeg7uro1snhd9wqmp2oz localhost] and IPs [47.93.161.2 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.001573 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "iz2zeg7uro1snhd9wqmp2oz" as an annotation
[mark-control-plane] Marking the node iz2zeg7uro1snhd9wqmp2oz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node iz2zeg7uro1snhd9wqmp2oz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eopu1d.aygr2do3dfz0zndh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join <your-server-ip>:6443 --token eopu1d.aygr2do3dfz0zndh --discovery-token-ca-cert-hash sha256:c74094ffde73bf834a13a994f6715d2a6fcc165913a54812255d62c90460153b
 
 
  • Cluster component status
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
 
  • Deployed pods
    Since I have not deployed a network add-on yet, a few of them are unhealthy (see the sketch after the listing)
 
kubectl get all --all-namespaces
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-754c658c4f-kr29k                          0/1     Pending   0          6m32s
kube-system   pod/coredns-754c658c4f-lthgk                          0/1     Pending   0          6m32s
kube-system   pod/etcd-iz2zeg7uro1snhd9wqmp2oz                      1/1     Running   0          5m40s
kube-system   pod/kube-apiserver-iz2zeg7uro1snhd9wqmp2oz            1/1     Running   0          5m27s
kube-system   pod/kube-controller-manager-iz2zeg7uro1snhd9wqmp2oz   1/1     Running   0          5m35s
kube-system   pod/kube-proxy-kkfrl                                  1/1     Running   0          6m32s
kube-system   pod/kube-scheduler-iz2zeg7uro1snhd9wqmp2oz            1/1     Running   0          5m46s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         6m41s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   6m37s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           <none>          6m37s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           6m37s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-754c658c4f   2         2         0       6m32s
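
The two CoreDNS pods will stay Pending until a pod network add-on is deployed. As a hedged example, flannel is one of the options on the add-ons page that the init output points to; note that flannel assumes kubeadm init was run with --pod-network-cidr=10.244.0.0/16, which the init above did not pass:

# Deploy flannel as the pod network (upstream manifest location at the time of writing).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once a network add-on is running, the coredns deployment should go 2/2 Ready.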
 
 

Notes

For the detailed follow-up steps, refer to the official documentation; my demo covers only part of the process. The main point is that a single configuration parameter solves most of the image-wall problems.
kubeadm supports other convenient parameters as well; use kubeadm --help or the official documentation to learn more.

New-style configuration parameters

Besides the command-line flag, the image repository can also be set through kubeadm's configuration file; a sketch follows.
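A minimal sketch, assuming the kubeadm.k8s.io/v1beta1 config API that ships with kubeadm v1.13; the file name kubeadm-config.yaml is arbitrary:

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
imageRepository: index.docker.io/mirrorgooglecontainers
EOF
kubeadm init --config kubeadm-config.yaml

For reference, here are the processes running after the init above (IPs redacted); note the --pod-infra-container-image flag on the kubelet pointing at the mirror: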

 
root 5471 1 1 14:33 ? 00:00:07 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=index.docker.io/mirrorgooglecontainers/pause:3.1
root 6016 5982 3 14:33 ? 00:00:17 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=<ip> --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 6128 5996 1 14:33 ? 00:00:05 etcd --advertise-client-urls=https://<ip>:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://<ip>:2380 --initial-cluster=iz2zeg7uro1snhd9wqmp2oz=https://<ip>:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://<ip>:2379 --listen-peer-urls=https://<ip>:2380 --name=iz2zeg7uro1snhd9wqmp2oz --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root 6216 6000 0 14:33 ? 00:00:03 kube-scheduler --address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root 6226 6011 1 14:33 ? 00:00:08 kube-controller-manager --address=127.0.0.1 --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
root 6727 6707 0 14:34 ? 00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=iz2zeg7uro1snhd9wqmp2oz
 

References

https://kubernetes.io/docs/setup/independent/install-kubeadm/
