Installing Docker

Reference: https://www.cnblogs.com/rdchenxi/p/10381631.html

Registry mirror (accelerator) configuration

Reference: https://www.cnblogs.com/rdchenxi/p/10399885.html
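The post above has the details; as a minimal sketch, a registry mirror is normally configured through /etc/docker/daemon.json (the mirror URL below is a placeholder, substitute your own accelerator address):

# Minimal sketch, assuming a placeholder mirror address; replace it with your own.
cat <<EOF >/etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry-mirror.example.com"]
}
EOF
systemctl restart docker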

Kubernetes networking (the CNI network model)

The Flannel network

Overlay networks

An overlay network is an application-level network: it is built for the applications that run on top of it and takes little or no account of the network and physical layers underneath.

More concretely, an overlay is a network built on top of another network. Its nodes can be viewed as connected by virtual or logical links, each of which corresponds to a path that may cross many physical links underneath. Many P2P networks are overlays, for example, since they run on top of the Internet. An overlay can also route to destinations that are not identified by an IP address: Freenet and DHTs (distributed hash tables) route messages to the node that stores a particular file even though that node's IP address is not known in advance.

Overlay networks are regarded as one way to improve routing on the Internet: by carrying a layer-2 network across a layer-3 network they work around the shortcomings of layer 2 while avoiding the inflexibility of layer 3.
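As a rough, standalone illustration of "carrying layer 2 over layer 3" (not part of the Flannel setup below; the VNI, addresses, and interface name are example values only), Linux can build such a tunnel by hand with a VXLAN device:

# Hypothetical sketch on one of two hosts: Ethernet frames addressed to the
# 10.244.1.0/24 overlay are encapsulated in UDP and sent to the peer host.
ip link add vxlan100 type vxlan id 100 dstport 4789 remote 192.168.10.14 dev ens33
ip addr add 10.244.1.1/24 dev vxlan100
ip link set vxlan100 up

Flannel's vxlan backend automates the creation of this kind of device, and the forwarding entries behind it, on every node.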

Flannel

Flannel is essentially an overlay network: it wraps the original TCP/IP packets inside another network packet for routing, forwarding, and communication. It currently supports several data-forwarding backends, including UDP, VXLAN, AWS VPC, and GCE routes.

The default inter-node forwarding method is UDP.
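The backend is selected in the network config that gets written into etcd in the next step. A sketch of the common forms (only the vxlan variant is used in this article):

{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"} }      <- VXLAN encapsulation, used below
{ "Network": "172.17.0.0/16", "Backend": {"Type": "udp"} }        <- userspace UDP, the slowest path
{ "Network": "172.17.0.0/16", "Backend": {"Type": "host-gw"} }    <- plain host routes, requires L2-adjacent nodes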

Installing Flannel

Write the allocated subnet range into etcd

[root@mast-1 k8s]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@mast-1 k8s]#

Check the stored data

[root@mast-1 k8s]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Download and install Flannel

[root@node-1 ~]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
--2019-04-20 09:38:45-- https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
Resolving github.com (github.com)... 13.250.177.223, 52.74.223.119, 13.229.188.59
Connecting to github.com (github.com)|13.250.177.223|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/... [following]
Resolving github-production-release-asset-2e65be.s3.amazonaws.com... 52.216.139.211
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com|52.216.139.211|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9706487 (9.3M) [application/octet-stream]
Saving to: 'flannel-v0.10.0-linux-amd64.tar.gz'
100%[========================================>] 9,706,487  15.6KB/s   in 7m 23s
2019-04-20 09:46:19 (21.4 KB/s) - 'flannel-v0.10.0-linux-amd64.tar.gz' saved [9706487/9706487]

  Install on node-1

[root@node-1 ~]# mkdir /opt/kubernetes/{bin,cfg} -pv
mkdir: created directory '/opt/kubernetes'
mkdir: created directory '/opt/kubernetes/bin'
mkdir: created directory '/opt/kubernetes/cfg'
[root@node-1 ~]# tar xf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/kubernetes/bin/
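A quick check that the binaries landed where the script below expects them:

# Sanity check (sketch): the release tarball should have dropped these files into place.
ls /opt/kubernetes/bin/
# expected: flanneld  mk-docker-opts.sh  README.md  (the same files scp'd to node-2 later)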
[root@node-1 ~]# cat flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# Read the subnet options generated by Flannel
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker
[root@node-1 ~]# bash flannel.sh "https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379"
[root@node-1 ~]# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
[root@node-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:f7:91:47 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.13/24 brd 192.168.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::6017:43d:a11c:2a9f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:19:5d:ee:63 brd ff:ff:ff:ff:ff:ff
inet 172.17.8.1/24 brd 172.17.8.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 56:2f:96:00:5c:05 brd ff:ff:ff:ff:ff:ff
inet 172.17.8.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::542f:96ff:fe00:5c05/64 scope link
valid_lft forever preferred_lft forever
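docker0 (172.17.8.1/24) and flannel.1 (172.17.8.0/32) sit on the same per-node subnet because mk-docker-opts.sh converted Flannel's lease into Docker start-up options. A hedged way to confirm this on the node (the exact variable names depend on the mk-docker-opts.sh flags used in flannel.sh):

# Docker reads this file as its EnvironmentFile; look for a --bip that matches
# the flannel.1 subnet, e.g. --bip=172.17.8.1/24 --mtu=1450 on node-1.
cat /run/flannel/subnet.env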

  Install on node-2

[root@node-1 ~]# scp -r /usr/lib/systemd/system/docker.service 192.168.10.14:/usr/lib/systemd/system
root@192.168.10.14's password:
docker.service 100% 526 236.7KB/s 00:00
[root@node-1 ~]# scp -r /usr/lib/systemd/system/flanneld.service 192.168.10.14:/usr/lib/systemd/system
root@192.168.10.14's password:
flanneld.service 100% 417 178.3KB/s 00:00
[root@node-1 ~]# scp -r /opt/kubernetes 192.168.10.14:/opt/
root@192.168.10.14's password:
Permission denied, please try again.
root@192.168.10.14's password:
flanneld 100% 35MB 11.5MB/s 00:03
mk-docker-opts.sh 100% 2139 40.6KB/s 00:00
README.md 100% 4298 109.4KB/s 00:00
flanneld 100% 235 55.1KB/s 00:00
Create the directory on node-2 first, then copy the etcd certificates over:

[root@node-2 ~]# mkdir /opt/etcd
[root@node-1 ~]# scp -r /opt/etcd/ssl 192.168.10.14:/opt/etcd/
root@192.168.10.14's password:
ca.pem 100% 1265 70.7KB/s 00:00
server-key.pem 100% 1675 79.2KB/s 00:00
server.pem 100% 1338 39.5KB/s 00:00

  Start node-2
[root@node-2 ~]# systemctl daemon-reload
[root@node-2 ~]# systemctl restart flanneld
[root@node-2 ~]# systemctl restart docker
[root@node-2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e9:c2:41 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.14/24 brd 192.168.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::85fd:b3b3:c97:eca3/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:28:a8:bb:18 brd ff:ff:ff:ff:ff:ff
inet 172.17.82.1/24 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 42:02:5f:e8:9d:d8 brd ff:ff:ff:ff:ff:ff
inet 172.17.82.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::4002:5fff:fee8:9dd8/64 scope link
valid_lft forever preferred_lft forever

  Add routes so that containers on the two nodes can reach each other. Note that Flannel is normally supposed to add these routes itself; it may have failed here because the route tool was not installed.

[root@node-1 ~]# route add -net 172.17.82.0/24 gw 192.168.10.14     # route added on node-1
[root@node-2 ~]# route add -net 172.17.8.0/24 gw 192.168.10.13      # route added on node-2
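An equivalent form with iproute2, plus a check, in case the net-tools route command is not installed (which is the suspected reason the automatic step failed above); the gateways are the same node addresses as in the two commands:

# Sketch: the same routes expressed with iproute2
ip route add 172.17.82.0/24 via 192.168.10.14    # on node-1
ip route add 172.17.8.0/24 via 192.168.10.13     # on node-2
# Verify that both pod subnets are now reachable
ip route | grep 172.17

Either way, routes added by hand like this do not survive a reboot.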
[root@node-1 ~]# docker run -it busybox sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:ac:11:08:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.8.2/24 brd 172.17.8.255 scope global eth0
valid_lft forever preferred_lft forever
Container on node-2
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:52:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.82.2/24 brd 172.17.82.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 172.17.8.2
PING 172.17.8.2 (172.17.8.2): 56 data bytes
64 bytes from 172.17.8.2: seq=3283 ttl=62 time=0.944 ms
64 bytes from 172.17.8.2: seq=3284 ttl=62 time=0.950 ms
64 bytes from 172.17.8.2: seq=3285 ttl=62 time=0.712 ms
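To see which path the ping actually takes, a tcpdump sketch on the node's physical interface can help (assumes tcpdump is installed; ens33 is this cluster's NIC name, and 8472 is the Linux default VXLAN port used by Flannel's vxlan backend):

# Run on node-1 or node-2 while the ping is going
tcpdump -ni ens33 'udp port 8472 or (icmp and net 172.17.0.0/16)'
# VXLAN-encapsulated packets on UDP 8472 mean the flannel.1 tunnel is carrying
# the traffic; bare ICMP between 172.17.x.x addresses means the manually added
# host routes are being used instead.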

  View the generated network configuration

[root@node-1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" ls /coreos.com/network
/coreos.com/network/config
/coreos.com/network/subnets
[root@node-1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.8.0-24
/coreos.com/network/subnets/172.17.82.0-24

  View the per-subnet settings in etcd

[root@node-1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" get /coreos.com/network/subnets/172.17.8.0-24

{"PublicIP":"192.168.10.13","BackendType":"vxlan","BackendData": {"VtepMAC":"56:2f:96:00:5c:05"}}
[root@node-1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" get /coreos.com/network/subnets/172.17.82.0-24
{"PublicIP":"192.168.10.14","BackendType":"vxlan","BackendData":{"VtepMAC":"42:02:5f:e8:9d:d8"}}
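The VtepMAC stored with each subnet is the MAC of that node's flannel.1 device (compare with the ip a output above). flanneld installs it on every other node as a VXLAN forwarding-database entry plus a neighbor entry, which can be inspected locally. A hedged check, using node-2's values as seen from node-1:

# On node-1 (bridge and ip come from the iproute2 package)
bridge fdb show dev flannel.1
# expect an entry like: 42:02:5f:e8:9d:d8 dst 192.168.10.14 self permanent
ip neigh show dev flannel.1
# expect an entry like: 172.17.82.0 lladdr 42:02:5f:e8:9d:d8 PERMANENT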

  

  

  
