Out of the box, docker containers on different hosts cannot communicate with each other. Cross-host container communication is nevertheless possible, and it is implemented mainly with VXLAN.

  For background, see the introduction to docker-overlay-network on GitHub.

17.1 Multi-host container communication with an overlay network and etcd

  By default docker attaches new containers to the bridge network. To let containers on different machines communicate, an overlay network is needed, and since the two communicating containers must not be assigned the same IP address, a third-party key-value store is required to coordinate address allocation across hosts. Here we use etcd.
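  One prerequisite worth checking first: overlay networking relies on the kernel's VXLAN support. A quick sanity check on each host (an extra step, not from the original walkthrough):

  # load the vxlan module and confirm it is present
  [root@docker ~]# modprobe vxlan && lsmod | grep vxlan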

Install etcd

  Install it on the first server (192.168.205.10):

  [root@docker ~]# wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
  [root@docker ~]# tar xf etcd-v3.0.12-linux-amd64.tar.gz
  [root@docker ~]# cd etcd-v3.0.12-linux-amd64/
  [root@docker ~]# nohup ./etcd --name docker-node1 --initial-advertise-peer-urls http://192.168.205.10:2380 \
  --listen-peer-urls http://192.168.205.10:2380 \
  --listen-client-urls http://192.168.205.10:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.205.10:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
  --initial-cluster-state new &
  [root@docker ~]#

  Install it on the second server (192.168.205.11):

  [root@docker ~]# wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
  [root@docker ~]# tar xf etcd-v3.0.12-linux-amd64.tar.gz
  [root@docker ~]# cd etcd-v3.0.12-linux-amd64/
  [root@docker ~]# nohup ./etcd --name docker-node2 --initial-advertise-peer-urls http://192.168.205.11:2380 \
  > --listen-peer-urls http://192.168.205.11:2380 \
  > --listen-client-urls http://192.168.205.11:2379,http://127.0.0.1:2379 \
  > --advertise-client-urls http://192.168.205.11:2379 \
  > --initial-cluster-token etcd-cluster \
  > --initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
  > --initial-cluster-state new &
  [root@docker ~]#
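  If a firewall is active on either host, etcd's client port (2379) and peer port (2380) must be reachable between the two servers or the cluster will not form. A minimal sketch using firewalld (the CentOS default; adjust to your environment):

  # open the etcd client and peer ports, then reload the rules
  [root@docker ~]# firewall-cmd --permanent --add-port=2379/tcp --add-port=2380/tcp
  [root@docker ~]# firewall-cmd --reload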

  Check the cluster status:

  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl cluster-health
  member 21eca106efe4caee is healthy: got healthy result from http://192.168.205.10:2379
  member 8614974c83d1cc6d is healthy: got healthy result from http://192.168.205.11:2379
  cluster is healthy

Restart the docker service

  Restart it on the first server:

  systemctl stop docker.service
  /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.10:2379 --cluster-advertise=192.168.205.10:2375 &

  Restart it on the second server:

  systemctl stop docker.service
  /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.11:2379 --cluster-advertise=192.168.205.11:2375 &
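  Launching dockerd by hand like this works for a demo but does not survive a reboot. On the Docker releases this walkthrough targets (--cluster-store was removed in Docker 20.10), the same settings can instead be kept in /etc/docker/daemon.json, sketched here for the second server (an assumption, not part of the original steps; only the cluster keys are shown, since a "hosts" key would conflict with systemd units that already pass -H):

  [root@docker ~]# cat /etc/docker/daemon.json
  {
      "cluster-store": "etcd://192.168.205.11:2379",
      "cluster-advertise": "192.168.205.11:2375"
  }
  [root@docker ~]# systemctl restart docker.service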

Create the overlay network

  Create an overlay network on either one of the two servers:

  [root@docker ~]# docker network create -d overlay demo
  [root@docker ~]# docker network ls
  NETWORK ID          NAME                DRIVER              SCOPE
  038cb815ca11        bridge              bridge              local
  efeabebb2ed5        demo                overlay             global
  674c97014876        host                host                local
  ac706f4efd8e        none                null                local
  [root@docker ~]# docker network inspect demo
  [
      {
          "Name": "demo",
          "Id": "efeabebb2ed5b63e705cb2eb3b9f77109119a71fdb89d05b105db30ae25c06f6",
          "Created": "2018-06-06T09:50:59.567617763Z",
          "Scope": "global",
          "Driver": "overlay",
          "EnableIPv6": false,
          "IPAM": {
              "Driver": "default",
              "Options": {},
              "Config": [
                  {
                      "Subnet": "10.0.0.0/24",
                      "Gateway": "10.0.0.1"
                  }
              ]
          },
          "Internal": false,
          "Attachable": false,
          "Ingress": false,
          "ConfigFrom": {
              "Network": ""
          },
          "ConfigOnly": false,
          "Containers": {},
          "Options": {},
          "Labels": {}
      }
  ]
  [root@docker ~]#

  The same overlay network is created automatically on the other server as well. This synchronization is done through etcd:

  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls
  /docker
  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker
  /docker/nodes
  /docker/network
  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/nodes
  /docker/nodes/192.168.205.10:2375
  /docker/nodes/192.168.205.11:2375
  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network
  /docker/network/v1.0
  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network/v1.0
  /docker/network/v1.0/endpoint_count
  /docker/network/v1.0/endpoint
  /docker/network/v1.0/ipam
  /docker/network/v1.0/idm
  /docker/network/v1.0/overlay
  /docker/network/v1.0/network
  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network/v1.0/overlay
  /docker/network/v1.0/overlay/network
  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls /docker/network/v1.0/overlay/network
  /docker/network/v1.0/overlay/network/efeabebb2ed5b63e705cb2eb3b9f77109119a71fdb89d05b105db30ae25c06f6
  [root@docker etcd-v3.0.12-linux-amd64]#
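  As a quick cross-check (optional; the output below is reconstructed from the IDs above), listing networks on the second server should show the same demo network with the same ID and global scope, even though it was never created there by hand:

  [root@docker ~]# docker network ls | grep demo
  efeabebb2ed5        demo                overlay             global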

Create a container on each of the two servers

  Create one on the first server:

  [root@docker ~]# docker run -d --name test1 --net demo busybox sh -c "while true; do sleep 3600; done"
  [root@docker ~]# docker ps
  CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
  170e8edf81f2        busybox             "sh -c 'while true; …"   3 minutes ago       Up 3 minutes                            test1
  [root@docker ~]# docker exec -it test1 ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
  13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
      link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
      inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
         valid_lft forever preferred_lft forever
  15: eth1@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
      link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
      inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
         valid_lft forever preferred_lft forever
  [root@docker ~]#

  Create one on the second server:

  [root@docker ~]# docker run -d --name test2 --net demo busybox sh -c "while true; do sleep 3600; done"
  [root@docker ~]# docker ps
  CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
  8b50c21f1337        busybox             "sh -c 'while true; …"   2 minutes ago       Up 2 minutes                            test2
  [root@docker ~]# docker exec -it test2 ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
  7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
      link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
      inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
         valid_lft forever preferred_lft forever
  10: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
      link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
      inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
         valid_lft forever preferred_lft forever
  [root@docker ~]#
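  Note that each container ends up with two interfaces: eth0 (MTU 1450, leaving headroom for the VXLAN encapsulation) is attached to the demo overlay network, while eth1 is attached to a local bridge network named docker_gwbridge that docker creates automatically to give overlay-attached containers outbound connectivity. You can confirm this on either host (output omitted here):

  [root@docker ~]# docker network ls | grep docker_gwbridge
  [root@docker ~]# docker network inspect docker_gwbridge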

  Inspect the demo network again:

  [root@docker ~]# docker network inspect demo
  [
      {
          "Name": "demo",
          "Id": "efeabebb2ed5b63e705cb2eb3b9f77109119a71fdb89d05b105db30ae25c06f6",
          "Created": "2018-06-06T09:50:59.567617763Z",
          "Scope": "global",
          "Driver": "overlay",
          "EnableIPv6": false,
          "IPAM": {
              "Driver": "default",
              "Options": {},
              "Config": [
                  {
                      "Subnet": "10.0.0.0/24",
                      "Gateway": "10.0.0.1"
                  }
              ]
          },
          "Internal": false,
          "Attachable": false,
          "Ingress": false,
          "ConfigFrom": {
              "Network": ""
          },
          "ConfigOnly": false,
          "Containers": {
              "170e8edf81f2bc216b926c52928c0e6977809387cc21db433c56d7b7d397f49b": {
                  "Name": "test1",
                  "EndpointID": "247454410f441b545c97c3d53cae508cbdbb9c2d91745381adf70580a77f8ec7",
                  "MacAddress": "",
                  "IPv4Address": "10.0.0.2/24",
                  "IPv6Address": ""
              },
              "ep-5e95b84eff1dbb3fbdc6abb4daa0707e117dac66220222a2e22a75bf6b7eb09d": {
                  "Name": "test2",
                  "EndpointID": "5e95b84eff1dbb3fbdc6abb4daa0707e117dac66220222a2e22a75bf6b7eb09d",
                  "MacAddress": "",
                  "IPv4Address": "10.0.0.3/24",
                  "IPv6Address": ""
              }
          },
          "Options": {},
          "Labels": {}
      }
  ]
  [root@docker ~]#
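  Notice that test2, which runs on the other host, appears under an "ep-" key: the local daemon only knows about it through the endpoint record that the remote daemon wrote into etcd. If you are curious, those endpoint records can be listed directly (the exact keys depend on your network and endpoint IDs):

  [root@docker etcd-v3.0.12-linux-amd64]# ./etcdctl ls --recursive /docker/network/v1.0/endpoint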

Test whether the two containers can communicate

  [root@docker ~]# docker exec -it test1 ping 10.0.0.3
  PING 10.0.0.3 (10.0.0.3): 56 data bytes
  64 bytes from 10.0.0.3: seq=0 ttl=64 time=3.251 ms
  64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.693 ms
  64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.591 ms
  64 bytes from 10.0.0.3: seq=3 ttl=64 time=0.579 ms
  64 bytes from 10.0.0.3: seq=4 ttl=64 time=0.776 ms
  ^C
  --- 10.0.0.3 ping statistics ---
  5 packets transmitted, 5 packets received, 0% packet loss
  round-trip min/avg/max = 0.579/1.178/3.251 ms
  [root@docker ~]#
  [root@docker ~]# docker exec -it test1 ping test2
  PING test2 (10.0.0.3): 56 data bytes
  64 bytes from 10.0.0.3: seq=0 ttl=64 time=1.024 ms
  64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.565 ms
  64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.806 ms
  64 bytes from 10.0.0.3: seq=3 ttl=64 time=0.597 ms
  64 bytes from 10.0.0.3: seq=4 ttl=64 time=0.498 ms
  ^C
  --- test2 ping statistics ---
  5 packets transmitted, 5 packets received, 0% packet loss
  round-trip min/avg/max = 0.498/0.698/1.024 ms
  [root@docker ~]#
  [root@docker ~]# docker exec -it test2 ping 10.0.0.2
  PING 10.0.0.2 (10.0.0.2): 56 data bytes
  64 bytes from 10.0.0.2: seq=0 ttl=64 time=3.374 ms
  64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.531 ms
  64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.499 ms
  ^C
  --- 10.0.0.2 ping statistics ---
  3 packets transmitted, 3 packets received, 0% packet loss
  round-trip min/avg/max = 0.499/1.468/3.374 ms
  [root@docker ~]#
  [root@docker ~]# docker exec -it test2 ping test1
  PING test1 (10.0.0.2): 56 data bytes
  64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.685 ms
  64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.754 ms
  64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.642 ms
  64 bytes from 10.0.0.2: seq=3 ttl=64 time=1.080 ms
  ^C
  --- test1 ping statistics ---
  4 packets transmitted, 4 packets received, 0% packet loss
  round-trip min/avg/max = 0.642/0.790/1.080 ms
  [root@docker ~]#
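  The pings above travel between the two hosts inside VXLAN packets; the docker overlay driver uses UDP port 4789 by default. As a final sanity check you can watch the encapsulated traffic while a ping is running (a sketch: replace <iface> with the host NIC that carries the 192.168.205.0/24 traffic):

  [root@docker ~]# tcpdump -n -i <iface> udp port 4789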
