I. Default Container Network Communication

Usage:    dockerd [OPTIONS]
Options:
--icc    Enable inter-container communication (default true)

Setting --icc=false disables network communication between containers.

Docker uses bridge mode by default. After installation, the service creates a bridge named docker0 on the 172.17.0.0/16 network. Each started container gets a pair of virtual interfaces, one inside the container and one on the host, and is dynamically assigned an address from 172.17.0.0/16. By attaching the host-side virtual interface to the bridge, containers can communicate with each other and with the outside world.
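The addressing convention above can be sketched with Python's ipaddress module (a minimal illustration, not a query against a live daemon; the addresses mirror the ip a output shown later):

```python
import ipaddress

# Default docker0 subnet used by the bridge driver
net = ipaddress.ip_network("172.17.0.0/16")

# By convention the bridge itself takes the first usable address and acts as
# the containers' default gateway; containers receive subsequent addresses.
hosts = net.hosts()
gateway = next(hosts)          # 172.17.0.1 -> docker0
first_container = next(hosts)  # 172.17.0.2 -> first container started

print(gateway, first_container, net.num_addresses - 2)
```

With a /16 this leaves 65534 usable addresses shared by the bridge and its containers.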

Install the bridge-utils package to inspect the bridge's status:

[root@Docker-Ubu1804-p11:~]# apt install -y bridge-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
ifupdown
The following NEW packages will be installed:
bridge-utils
0 upgraded, 1 newly installed, 0 to remove and 69 not upgraded.
Need to get 30.1 kB of archives.
After this operation, 102 kB of additional disk space will be used.
Get:1 http://cn.archive.ubuntu.com/ubuntu bionic/main amd64 bridge-utils amd64 1.5-15ubuntu1 [30.1 kB]
Fetched 30.1 kB in 1s (34.9 kB/s)
Selecting previously unselected package bridge-utils.
(Reading database ... 108899 files and directories currently installed.)
Preparing to unpack .../bridge-utils_1.5-15ubuntu1_amd64.deb ...
Unpacking bridge-utils (1.5-15ubuntu1) ...
Setting up bridge-utils (1.5-15ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
# Host network interfaces
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ee:f6:4c:b1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:eeff:fef6:4cb1/64 scope link
valid_lft forever preferred_lft forever
9: veth16f14be@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether be:32:c5:e1:12:b3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::bc32:c5ff:fee1:12b3/64 scope link
valid_lft forever preferred_lft forever
# Bridge information
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242eef64cb1 no veth16f14be
# Network interfaces inside the container
[root@Docker-Ubu1804-p11:~]# docker exec app1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

By default, containers on the same host can communicate with each other over the network:

[root@Docker-Ubu1804-p11:~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98266e2eae21 busybox "tail -f /etc/hosts" 2 minutes ago Up 2 minutes brave_bassi
3559cb35d921 janzen/app1 "nginx" 10 minutes ago Up 9 minutes 0.0.0.0:80->80/tcp, 443/tcp app1
94e0a35875d9 mysql "docker-entrypoint.s…" 3 days ago Exited (0) 3 days ago some-mysql
[root@Docker-Ubu1804-p11:~]# docker exec app1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# docker exec 98266e2eae21 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# docker exec -it app1 bash
[root@3559cb35d921 /]# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.204 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.049 ms
64 bytes from 172.17.0.3: icmp_seq=4 ttl=64 time=0.049 ms
64 bytes from 172.17.0.3: icmp_seq=5 ttl=64 time=0.051 ms
^C
--- 172.17.0.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4081ms
rtt min/avg/max/mdev = 0.049/0.080/0.204/0.062 ms
[root@3559cb35d921 /]# exit
exit
[root@Docker-Ubu1804-p11:~]# docker exec -it 98266e2eae21 sh
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.055 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.063 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.066 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.063 ms
^C
--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.055/0.061/0.066 ms
/ # exit

Disabling inter-container communication

# Edit the systemd unit file for dockerd and add the --icc=false flag to disable network access between containers
[root@Docker-Ubu1804-p11:~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false
[root@Docker-Ubu1804-p11:~]# systemctl daemon-reload
[root@Docker-Ubu1804-p11:~]# systemctl restart docker.service
[root@Docker-Ubu1804-p11:~]# ps -aux | grep docker
root 4994 0.3 8.4 838972 83096 ? Ssl 01:50 0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false
root 5155 0.0 0.1 13216 1108 pts/0 S+ 01:51 0:00 grep --color=auto docker
[root@Docker-Ubu1804-p11:~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc66eb6cc1ca janzen/app1 "nginx" 15 seconds ago Up 14 seconds 0.0.0.0:80->80/tcp, 443/tcp app1
98266e2eae21 busybox "tail -f /etc/hosts" 22 minutes ago Up 5 seconds brave_bassi
94e0a35875d9 mysql "docker-entrypoint.s…" 3 days ago Exited (0) 3 days ago some-mysql
[root@Docker-Ubu1804-p11:~]# docker exec app1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# docker exec 98266 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# docker exec -it app1 bash
[root@cc66eb6cc1ca /]# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
^C
--- 172.17.0.3 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5102ms
[root@cc66eb6cc1ca /]# exit
exit
[root@Docker-Ubu1804-p11:~]# docker exec -it 98266 sh
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
7 packets transmitted, 0 packets received, 100% packet loss
/ # exit

Modifying the default network configuration

Usage:    dockerd [OPTIONS]

A self-sufficient runtime for containers.

Options:
--bip string Specify network bridge IP
-b, --bridge string Attach containers to a network bridge

Changing the IP address used by the default bridge

[root@Docker-Ubu1804-p11:~]# vim /lib/systemd/system/docker.service
[root@Docker-Ubu1804-p11:~]# cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=192.168.17.1/24
[root@Docker-Ubu1804-p11:~]# systemctl daemon-reload
[root@Docker-Ubu1804-p11:~]# systemctl restart docker.service
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ee:f6:4c:b1 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:eeff:fef6:4cb1/64 scope link
valid_lft forever preferred_lft forever
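Note that --bip expects the bridge's own host address inside the desired subnet (192.168.17.1/24 above), not the bare network address. A quick sanity check, using a helper of our own (validate_bip is not part of any Docker tooling):

```python
import ipaddress

def validate_bip(bip: str) -> ipaddress.IPv4Network:
    """Check that a --bip value names a usable bridge IP and return its subnet."""
    iface = ipaddress.ip_interface(bip)
    net = iface.network
    # The bridge IP must not be the network or broadcast address of its subnet.
    if iface.ip in (net.network_address, net.broadcast_address):
        raise ValueError(f"{bip}: bridge IP must be a host address in {net}")
    return net

net = validate_bip("192.168.17.1/24")
print(net)                    # 192.168.17.0/24
print(net.broadcast_address)  # 192.168.17.255
```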

Changing the bridge containers attach to

# Create a new bridge br0 and configure it with the 192.168.19.0/24 network
[root@Docker-Ubu1804-p11:~]# brctl addbr br0
[root@Docker-Ubu1804-p11:~]# ip a a 192.168.19.1/24 dev br0
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000000000000 no
docker0 8000.0242eef64cb1 no
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ee:f6:4c:b1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:eeff:fef6:4cb1/64 scope link
valid_lft forever preferred_lft forever
16: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 76:17:d6:4b:2b:94 brd ff:ff:ff:ff:ff:ff
inet 192.168.19.1/24 scope global br0
valid_lft forever preferred_lft forever
# Edit the dockerd unit file, adding the -b flag to use br0 as the bridge containers connect to
[root@Docker-Ubu1804-p11:~]# vim /lib/systemd/system/docker.service
[root@Docker-Ubu1804-p11:~]# cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0
[root@Docker-Ubu1804-p11:~]# systemctl daemon-reload
[root@Docker-Ubu1804-p11:~]# systemctl restart docker.service
[root@Docker-Ubu1804-p11:~]# ps aux | grep dockerd
root 6721 0.0 8.3 757044 82048 ? Ssl 02:18 0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0
root 6870 0.0 0.1 13216 1148 pts/0 S+ 02:21 0:00 grep --color=auto dockerd
# Create a new container and check its network interfaces to verify
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx janzen/nginx-centos7:1.20.1-v2.0
5fb23af783414778bdf8cfda82d9138446c762c397ebd3befe4fab6ee3782faa
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:13:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.19.2/24 brd 192.168.19.255 scope global eth0
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.4a165c89efc8 no veth7c6805a
docker0 8000.0242eef64cb1 no

II. Linking Containers by Name

1. Linking via container names

1.1 Container name overview

Containers on the same host can reach each other via custom container names. Because a container's IP address is assigned dynamically at startup and may change, it is more reliable to address containers by a fixed name.

Add the --link option when creating a container with docker run to reference another container by name.

Note: the linked (target) container must be created first.

Note: if the linked container's address changes, the current container must be restarted to pick up the new address.

Usage:    docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

Options:
--link list Add link to another container
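As the transcripts below show, --link works by writing a static entry for the target container into the new container's /etc/hosts. The sketch below (resolve_from_hosts is our own illustrative helper) mimics a first-match hosts-file lookup and shows why the entry goes stale: it is plain text, rewritten only when the container starts.

```python
from typing import Optional

def resolve_from_hosts(hosts_text: str, name: str) -> Optional[str]:
    """Resolve a name the way the resolver consults /etc/hosts: first match wins."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        if name in names:
            return ip
    return None

# Sample content mirroring the transcript below
hosts = """\
127.0.0.1   localhost
192.168.19.2   mysql 17ad715abb24
192.168.19.3   e8e2515a4426
"""
print(resolve_from_hosts(hosts, "mysql"))  # 192.168.19.2
```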

1.2 Accessing the network by container name

# Create a mysql container, then create an nginx container linked to mysql
[root@Docker-Ubu1804-p11:~]# docker run --name mysql -e MYSQL_ROOT_PASSWORD=passwd -d mysql
17ad715abb2424b24a8c77c3a202b0b2fa732bff84d63b5911c0c2288fab41a6
[root@Docker-Ubu1804-p11:~]# docker run --name nginx --link mysql -d janzen/nginx-centos7:1.20.1-v2.0
e8e2515a4426de167f41deaa4dbb79c9a087933e2df3076645d1cdc0a0c6766e
[root@Docker-Ubu1804-p11:~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e8e2515a4426 janzen/nginx-centos7:1.20.1-v2.0 "nginx" 22 seconds ago Up 20 seconds 80/tcp, 443/tcp nginx
17ad715abb24 mysql "docker-entrypoint.s…" About a minute ago Up About a minute 3306/tcp, 33060/tcp mysql
[root@Docker-Ubu1804-p11:~]# docker inspect mysql -f "{{.NetworkSettings}}"
{{ 693f42a505b49fd8580e3be09c1da1391a5aa98de790bbf986103f76005cb270 false 0 map[3306/tcp:[] 33060/tcp:[]] /var/run/docker/netns/693f42a505b4 [] []} {d57c4d5b5c72ad0d38b780368c4ee7dafc449258a5e28e0cf5151c7883742d9f 192.168.19.1 0 192.168.19.2 24 02:42:c0:a8:13:02} map[bridge:0xc0003015c0]}
[root@Docker-Ubu1804-p11:~]# docker inspect nginx -f "{{.NetworkSettings}}"
{{ e761523b5361d148f107f0cc124d5a4aea054cfd99e0e39feb2a35a8119791d7 false 0 map[443/tcp:[] 80/tcp:[]] /var/run/docker/netns/e761523b5361 [] []} {48f9df6fa35cb7becb7e3c9af748e2497edbf25940644861b6bfd23bc01329d7 192.168.19.1 0 192.168.19.3 24 02:42:c0:a8:13:03} map[bridge:0xc0002ff5c0]}
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx bash
[root@e8e2515a4426 /]# cat /etc/host
host.conf hostname hosts hosts.allow hosts.deny
[root@e8e2515a4426 /]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.19.2 mysql 17ad715abb24
192.168.19.3 e8e2515a4426
[root@e8e2515a4426 /]# ping mysql
PING mysql (192.168.19.2) 56(84) bytes of data.
64 bytes from mysql (192.168.19.2): icmp_seq=1 ttl=64 time=0.167 ms
64 bytes from mysql (192.168.19.2): icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from mysql (192.168.19.2): icmp_seq=3 ttl=64 time=0.047 ms
^C
--- mysql ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2034ms
rtt min/avg/max/mdev = 0.047/0.092/0.167/0.053 ms
[root@e8e2515a4426 /]# exit
exit
# Verify the behavior when the container's IP changes
[root@Docker-Ubu1804-p11:~]# docker stop mysql
mysql
[root@Docker-Ubu1804-p11:~]# docker run -d busybox tail -f /etc/hosts
c3cbdbc623b1026cb3efffb613d0f4f1e7dae2473db0e3eb77551c319430c61a
[root@Docker-Ubu1804-p11:~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c3cbdbc623b1 busybox "tail -f /etc/hosts" 7 seconds ago Up 6 seconds admiring_wescoff
e8e2515a4426 janzen/nginx-centos7:1.20.1-v2.0 "nginx" 7 minutes ago Up 7 minutes 80/tcp, 443/tcp nginx
[root@Docker-Ubu1804-p11:~]# docker inspect c3cbdbc623b1 -f "{{.NetworkSettings}}"
{{ 616e505a4a2454ad05156c1e8c524c8ce366efa350cf5ad41565a4256d02005d false 0 map[] /var/run/docker/netns/616e505a4a24 [] []} {2513e7c313d0d8e18a47d5836c7647dcb06053bfe80025ca13ae2f75aaada014 192.168.19.1 0 192.168.19.2 24 02:42:c0:a8:13:02} map[bridge:0xc0002ff5c0]}
[root@Docker-Ubu1804-p11:~]# docker start mysql
mysql
[root@Docker-Ubu1804-p11:~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c3cbdbc623b1 busybox "tail -f /etc/hosts" 57 seconds ago Up 56 seconds admiring_wescoff
e8e2515a4426 janzen/nginx-centos7:1.20.1-v2.0 "nginx" 8 minutes ago Up 8 minutes 80/tcp, 443/tcp nginx
17ad715abb24 mysql "docker-entrypoint.s…" 8 minutes ago Up 4 seconds 3306/tcp, 33060/tcp mysql
[root@Docker-Ubu1804-p11:~]# docker inspect mysql -f "{{.NetworkSettings}}"
{{ e08e48e61767595fdc37df3590f1725666169a1c46f21060170e540a18ef6d42 false 0 map[3306/tcp:[] 33060/tcp:[]] /var/run/docker/netns/e08e48e61767 [] []} {f7e34ad6702fadf1c9d13ce53c13e0e25a1513004b3b14ddf49ba399d3ddc13a 192.168.19.1 0 192.168.19.4 24 02:42:c0:a8:13:04} map[bridge:0xc0002ff5c0]}
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx bash
[root@e8e2515a4426 /]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.19.4 mysql 17ad715abb24
192.168.19.3 e8e2515a4426
[root@e8e2515a4426 /]# ping mysql
PING mysql (192.168.19.4) 56(84) bytes of data.
64 bytes from mysql (192.168.19.4): icmp_seq=1 ttl=64 time=0.187 ms
64 bytes from mysql (192.168.19.4): icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from mysql (192.168.19.4): icmp_seq=3 ttl=64 time=0.046 ms
64 bytes from mysql (192.168.19.4): icmp_seq=4 ttl=64 time=0.111 ms
64 bytes from mysql (192.168.19.4): icmp_seq=5 ttl=64 time=0.048 ms
64 bytes from mysql (192.168.19.4): icmp_seq=6 ttl=64 time=0.045 ms
^C
--- mysql ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5067ms
rtt min/avg/max/mdev = 0.045/0.081/0.187/0.052 ms
[root@e8e2515a4426 /]# exit
exit
[root@Docker-Ubu1804-p11:~]#

2. Linking via container aliases

2.1 Container alias overview

Syntax:

docker run --name <container name>
docker run --link <target container name>:<target container alias>

2.2 Accessing the network by container alias

# Create a new container that links to the nginx container created earlier, with an alias
[root@Docker-Ubu1804-p11:~]# docker run -it --rm --link nginx:nginx-server1 alpine sh
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.19.3 nginx-server1 e8e2515a4426 nginx
192.168.19.5 dbf6642469fd
/ # ping nginx
PING nginx (192.168.19.3): 56 data bytes
64 bytes from 192.168.19.3: seq=0 ttl=64 time=0.234 ms
64 bytes from 192.168.19.3: seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.19.3: seq=2 ttl=64 time=0.065 ms
^C
--- nginx ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.124/0.234 ms
/ # ping nginx-server1
PING nginx-server1 (192.168.19.3): 56 data bytes
64 bytes from 192.168.19.3: seq=0 ttl=64 time=0.128 ms
64 bytes from 192.168.19.3: seq=1 ttl=64 time=0.072 ms
64 bytes from 192.168.19.3: seq=2 ttl=64 time=0.062 ms
^C
--- nginx-server1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.062/0.087/0.128 ms
/ # exit
# Create a new container that links to the mysql container created earlier, with multiple aliases
[root@Docker-Ubu1804-p11:~]# docker run -it --rm --link mysql:"mysql-node0 mysql-node1" alpine sh
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.19.4 mysql-node0 mysql-node1 17ad715abb24 mysql
192.168.19.5 0eb3e54c1a5e
/ # ping mysql
PING mysql (192.168.19.4): 56 data bytes
64 bytes from 192.168.19.4: seq=0 ttl=64 time=0.093 ms
64 bytes from 192.168.19.4: seq=1 ttl=64 time=0.067 ms
64 bytes from 192.168.19.4: seq=2 ttl=64 time=0.068 ms
^C
--- mysql ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.076/0.093 ms
/ # ping mysql-node0
PING mysql-node0 (192.168.19.4): 56 data bytes
64 bytes from 192.168.19.4: seq=0 ttl=64 time=0.052 ms
64 bytes from 192.168.19.4: seq=1 ttl=64 time=0.076 ms
64 bytes from 192.168.19.4: seq=2 ttl=64 time=0.073 ms
64 bytes from 192.168.19.4: seq=3 ttl=64 time=0.065 ms
^C
--- mysql-node0 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.052/0.066/0.076 ms
/ # ping mysql-node1
PING mysql-node1 (192.168.19.4): 56 data bytes
64 bytes from 192.168.19.4: seq=0 ttl=64 time=0.055 ms
64 bytes from 192.168.19.4: seq=1 ttl=64 time=0.076 ms
64 bytes from 192.168.19.4: seq=2 ttl=64 time=0.077 ms
64 bytes from 192.168.19.4: seq=3 ttl=64 time=0.075 ms
^C
--- mysql-node1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.055/0.070/0.077 ms
/ # exit

III. Docker Network Modes

1. Overview of container network modes

Docker supports five network modes.

Official documentation: https://docs.docker.com/config/containers/container-networking/

  • none
  • bridge
  • host
  • container
  • network-name
[root@Docker-Ubu1804-p11:~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
40cf956014a3 bridge bridge local
e33dad33c534 host host local
71f677643168 none null local

2. Specifying the network mode

Docker uses the bridge network mode by default. A different mode can be selected by adding a network option to the docker run command when creating the container:

docker run --network <mode>
docker run --net=<mode>

Valid values for <mode>:
none
bridge
host
container:<container name or ID>
<custom network name>
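The choices above can be captured in a tiny helper that assembles the network portion of a docker run command line (an illustrative sketch; network_args is our own name, not part of the Docker CLI or SDK):

```python
def network_args(mode: str, target: str = "") -> list:
    """Build the --network argument list for docker run."""
    if mode == "container":
        # container mode must name an existing container to share with
        if not target:
            raise ValueError("container mode requires a container name or ID")
        return ["--network", f"container:{target}"]
    # none, bridge, host, and custom network names are passed through as-is
    return ["--network", mode]

print(network_args("host"))
print(network_args("container", "nginx-bridge"))
```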

3. Bridge network mode

This is Docker's default network mode. Every container created in it is automatically assigned an IP address from the bridge network and is attached to a virtual bridge for communication with the outside.

Containers can reach external networks through SNAT, and DNAT can expose containers to external access, which is why this mode is also called NAT mode.

This mode requires the ip_forward feature to be enabled on the host.

Characteristics of bridge mode

  • Network isolation: containers on different hosts use independent networks and cannot communicate directly
  • No manual configuration: containers get addresses from 172.17.0.0/16 by default, and this range can be changed
  • Outbound access: containers reach external networks via SNAT through the host's physical NIC
  • No direct inbound access: external hosts cannot reach containers directly, but DNAT can be configured to accept external traffic
  • Lower performance: NAT translation costs some resources
  • Tedious port management: each published container port must be unique on the host, which can lead to port conflicts
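The last point can be illustrated with a toy model of -p host_port:container_port bookkeeping (our own sketch, not Docker's implementation): published host ports form a single namespace per host, so a second container claiming an already-published host port fails.

```python
def publish(bindings: dict, name: str, host_port: int, container_port: int) -> None:
    """Record a -p host_port:container_port mapping, rejecting duplicates."""
    if host_port in bindings:
        raise ValueError(f"host port {host_port} already used by {bindings[host_port][0]}")
    bindings[host_port] = (name, container_port)

b = {}
publish(b, "app1", 80, 80)    # -p 80:80
publish(b, "nginx", 8080, 80) # -p 8080:80
# publish(b, "web2", 80, 80)  # would raise: host port 80 already used by app1
print(b)
```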

Default bridge configuration

# Default bridge configuration
[root@Docker-Ubu1804-p11:~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "40cf956014a38ee53ecc7c2f36c87fae1c7c903f93e291b146657c1ec7dd9ef3",
"Created": "2023-05-02T22:28:34.35625987+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4dbcfaf0ed837a51fab95845a3cc6d3a7223a8b8441a2793f8888b222c746b46": {
"Name": "app1",
"EndpointID": "382dbe3365550872277ef6f15b19cb79ba01a10098a5fb99eba14e210bc4d0ff",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
# Host network state
[root@Docker-Ubu1804-p11:~]# cat /proc/sys/net/ipv4/ip_forward
1

Changing the bridge network range

# Change the bridge network range via dockerd startup flags
[root@Docker-Ubu1804-p11:~]# vim /lib/systemd/system/docker.service
[root@Docker-Ubu1804-p11:~]# cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=192.168.17.1/24
[root@Docker-Ubu1804-p11:~]# systemctl daemon-reload
[root@Docker-Ubu1804-p11:~]# systemctl restart docker.service
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ee:f6:4c:b1 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:eeff:fef6:4cb1/64 scope link
valid_lft forever preferred_lft forever
# Alternatively, change the bridge network range via the daemon config file
[root@Docker-Ubu1804-p11:~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://hub-mirror.c.163.com","https://po3g231a.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn"],
"bip": "192.168.17.1/24",  #配置docker0使用的IP,24是容器IP的掩码
"fixed-cidr": "192.168.17.128/26",  #配置自动分配给容器的网络范围,26不代表地址掩码,代表网段地址范围,
"default-gateway": "192.168.17.254"  #网关地址必须与bip地址在同一网段,默认为 docker0地址
}
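The constraints among bip, fixed-cidr, and default-gateway can be verified with ipaddress (our own check; note also that strict JSON does not allow # comments, so the annotations above must be removed from a real daemon.json):

```python
import ipaddress

bip = ipaddress.ip_interface("192.168.17.1/24")
pool = ipaddress.ip_network("192.168.17.128/26")
gateway = ipaddress.ip_address("192.168.17.254")

# fixed-cidr must be a sub-range of the bip subnet ...
assert pool.subnet_of(bip.network)
# ... and the default gateway must live in the bip subnet too.
assert gateway in bip.network

# Containers are assigned out of the /26 pool:
print(pool[0], "-", pool[-1])  # 192.168.17.128 - 192.168.17.191
```

In the transcript below, the first container indeed receives 192.168.17.128, the first address of this /26 pool.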
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:3a:65:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b4ff:fe3a:6543/64 scope link
valid_lft forever preferred_lft forever

Creating a container in the default bridge mode

[root@Docker-Ubu1804-p11:~]# docker run -d --name centos janzen/centos7:v1.0 tail -f /etc/hosts
02389ba56fbda68357902856e5e72b07bc3bf8adef40c7b64fe1acc5637e68e4
[root@Docker-Ubu1804-p11:~]# docker inspect centos -f "{{.NetworkSettings}}"
{{ 4be143e167f90088a90944196b660bc616f80c82cb2193cab40b0c40322a7912 false 0 map[] /var/run/docker/netns/4be143e167f9 [] []} {f2f5b8d19eb7cd146938bceb4711cf1846f75e90b8537565ca803e2f61e12b9b 192.168.17.254 0 192.168.17.128 24 02:42:c0:a8:11:80} map[bridge:0xc0002ff5c0]}
[root@Docker-Ubu1804-p11:~]# docker exec -it centos bash
[root@02389ba56fbd /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.128/24 brd 192.168.17.255 scope global eth0
valid_lft forever preferred_lft forever

4. Host network mode

A container started in host mode does not create its own virtual NIC; it uses the host's interfaces and IP directly, so the network information seen inside the container is actually the host's. Services in such containers are reached via host IP + container port. Everything other than the network remains isolated. Because the host NIC is used directly and no NAT is involved, this mode has the best network performance, but ports used by different containers must not overlap, so it suits workloads whose ports are relatively fixed.

Characteristics of host mode

  • Selected with the --network host option
  • Shares the host's network stack
  • No network performance overhead
  • Relatively simple network troubleshooting
  • No network isolation between containers
  • Per-container network usage cannot be measured separately
  • Harder port management: port conflicts are likely
  • Port mapping is not supported

Creating a host-mode container

# Check the host's network information
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:3a:65:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b4ff:fe3a:6543/64 scope link
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# ss -ntl | grep 80
# Create a container with the host network mode
[root@Docker-Ubu1804-p11:~]# docker run -d --name app1-host --network host janzen/app1:v3.0
4dddedc405962e76712ffc9e200ef2cf850517ebea7eca8be3e60e0f88e31270
# After the host-mode container starts, check the change in the host's listening ports
[root@Docker-Ubu1804-p11:~]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=9500,fd=6),("nginx",pid=9487,fd=6))
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=905,fd=13))
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1105,fd=3))
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:* users:(("sshd",pid=1979,fd=10))
LISTEN 0 128 [::]:80 [::]:* users:(("nginx",pid=9500,fd=7),("nginx",pid=9487,fd=7))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1105,fd=4))
LISTEN 0 128 [::1]:6010 [::]:* users:(("sshd",pid=1979,fd=9))
[root@Docker-Ubu1804-p11:~]# docker port app1-host
# Inside the container, the hostname is the host's hostname and the NICs match the host's
[root@Docker-Ubu1804-p11:~]# docker exec -it app1-host bash
[root@Docker-Ubu1804-p11 /]# hostname
Docker-Ubu1804-p11.janzen.com
[root@Docker-Ubu1804-p11 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:3a:65:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b4ff:fe3a:6543/64 scope link
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11 /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:* users:(("nginx",pid=1,fd=6))
LISTEN 0 128 127.0.0.53%lo:53 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 127.0.0.1:6010 *:*
LISTEN 0 128 [::]:80 [::]:* users:(("nginx",pid=1,fd=7))
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
# Port mapping options are ignored in host mode
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx-host --network host -p 8080:80 janzen/nginx-centos7:1.20.1-v2.0
WARNING: Published ports are discarded when using host network mode
a91677c1a43de4d55e6e866a76a119716a24e80c2a796d6891b05ec226122278
[root@Docker-Ubu1804-p11:~]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:6011 0.0.0.0:* users:(("sshd",pid=17610,fd=10))
LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=18743,fd=6),("nginx",pid=18726,fd=6))
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=905,fd=13))
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1105,fd=3))
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:* users:(("sshd",pid=1979,fd=10))
LISTEN 0 128 [::1]:6011 [::]:* users:(("sshd",pid=17610,fd=9))
LISTEN 0 128 [::]:80 [::]:* users:(("nginx",pid=18743,fd=7),("nginx",pid=18726,fd=7))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1105,fd=4))
LISTEN 0 128 [::1]:6010 [::]:* users:(("sshd",pid=1979,fd=9))

5. container network mode

A container created with the container network mode must reference an existing container and shares that container's network: it does not create its own network interface or configure its own IP, nor does it share the host's network. Instead it shares the referenced container's IP address and port range, so the two containers must not have conflicting ports. All resources other than the network remain isolated from each other, and processes in the two containers can communicate over the lo interface.

Features of the container network mode

  • Specified with --network container:<container name or ID>
  • Network is isolated from the host and from other containers
  • Shares a network namespace with the referenced container
  • Suited to containers that need frequent network communication with each other
  • Uses the other container's network directly; rarely used
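The shared network namespace can be verified directly from the host. The sketch below is illustrative only: it assumes Docker is installed, and the public nginx/busybox images and the container names web and sidecar are placeholders:

```shell
# Start a bridge-mode container, then a second container that joins its network
docker run -d --name web nginx
docker run -d --name sidecar --network container:web busybox sleep 3600

# Compare the network-namespace links of both containers' main processes;
# identical net:[inode] values mean they share one network stack
pid_web=$(docker inspect -f '{{.State.Pid}}' web)
pid_side=$(docker inspect -f '{{.State.Pid}}' sidecar)
readlink /proc/$pid_web/ns/net
readlink /proc/$pid_side/ns/net   # same value as the line above
```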

Configuring a container-mode container (sharing a bridge-mode container's network)

#Create container nginx-bridge using the default (bridge) network mode
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx-bridge janzen/nginx-centos7:1.20.1-v2.0
7e05357b5021a7343ef1976eea327812dde68c02097918215484ff629a76b24c
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx-bridge bash
[root@7e05357b5021 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.128/24 brd 192.168.17.255 scope global eth0
valid_lft forever preferred_lft forever
[root@7e05357b5021 /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:* users:(("nginx",pid=1,fd=6))
LISTEN 0 128 [::]:80 [::]:* users:(("nginx",pid=1,fd=7))
[root@7e05357b5021 /]# exit
exit
[root@Docker-Ubu1804-p11:~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "26b9f395f93fbe111f9ebaf387ea433e2e14e6ef4c726648ed6c2a8a6676e374",
"Created": "2023-05-03T01:04:57.132159917+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.17.0/24",
"IPRange": "192.168.17.128/30",
"Gateway": "192.168.17.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7e05357b5021a7343ef1976eea327812dde68c02097918215484ff629a76b24c": {
"Name": "nginx-bridge",
"EndpointID": "572cd7929b5af2bba6c041dbe41bf5d98b2fe4a69b6f268773b7c90233bdf905",
"MacAddress": "02:42:c0:a8:11:80",
"IPv4Address": "192.168.17.128/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
#Create a container that reuses the network of container nginx-bridge (bridge mode)
[root@Docker-Ubu1804-p11:~]# docker run -it --name centos --network container:nginx-bridge janzen/centos7:v1.0 bash
[root@7e05357b5021 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.128/24 brd 192.168.17.255 scope global eth0
valid_lft forever preferred_lft forever
[root@7e05357b5021 /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:*
LISTEN 0 128 [::]:80 [::]:*
[root@7e05357b5021 /]# curl 127.0.0.1
<h1>nginx-1.20.1 base centOS 7 on docker</h1>
[root@7e05357b5021 /]# exit

Configuring a container-mode container (sharing a host-mode container's network)

#Create container nginx-host using the host network mode
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx-host --network host janzen/nginx-centos7:1.20.1-v2.0
d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx-host bash
[root@Docker-Ubu1804-p11 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:3a:65:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b4ff:fe3a:6543/64 scope link
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11 /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:* users:(("nginx",pid=1,fd=6))
LISTEN 0 128 127.0.0.53%lo:53 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 127.0.0.1:6010 *:*
LISTEN 0 128 [::]:80 [::]:* users:(("nginx",pid=1,fd=7))
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
[root@Docker-Ubu1804-p11 /]# curl 127.0.0.1
<h1>nginx-1.20.1 base centOS 7 on docker</h1>
[root@Docker-Ubu1804-p11 /]# curl 10.0.0.11
<h1>nginx-1.20.1 base centOS 7 on docker</h1>
[root@Docker-Ubu1804-p11 /]# exit
exit
#Create a centos container that reuses the network of container nginx-host
[root@Docker-Ubu1804-p11:~]# docker run -it --name centos --network container:nginx-host janzen/centos7:v1.0 bash
[root@Docker-Ubu1804-p11 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:3a:65:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b4ff:fe3a:6543/64 scope link
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11 /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:*
LISTEN 0 128 127.0.0.53%lo:53 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 127.0.0.1:6010 *:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
[root@Docker-Ubu1804-p11 /]# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 Docker-Ubu1804-p11.janzen.com

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
[root@Docker-Ubu1804-p11 /]# curl 127.0.0.1
<h1>nginx-1.20.1 base centOS 7 on docker</h1>
[root@Docker-Ubu1804-p11 /]# curl 10.0.0.11
<h1>nginx-1.20.1 base centOS 7 on docker</h1>
[root@Docker-Ubu1804-p11 /]# exit
exit
[root@Docker-Ubu1804-p11:~]# docker inspect host
[
{
"Name": "host",
"Id": "e33dad33c534de2ab3cddbb789673284f71213e6c692592be1fa9ef48d361212",
"Created": "2023-04-23T10:22:38.456506929+08:00",
"Scope": "local",
"Driver": "host",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a": {
"Name": "nginx-host",
"EndpointID": "9162522dabdb1c002e80f0eac272c9c532f68b7c799ff74eacf8aab0fada83b2",
"MacAddress": "",
"IPv4Address": "",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]# docker inspect nginx-host
[
{
"Id": "d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a",
"Created": "2023-05-02T17:21:52.077607782Z",
"Path": "nginx",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 16460,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-05-02T17:21:52.424335491Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:4919aacb5ea0aa5d93a5f386f0df115c74cf774ff2df2bd68caf12b66fee3fe7",
"ResolvConfPath": "/var/lib/docker/containers/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a/hostname",
"HostsPath": "/var/lib/docker/containers/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a/hosts",
"LogPath": "/var/lib/docker/containers/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a-json.log",
"Name": "/nginx-host",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/65b930f040362793b56db76574bc16a8508e4cb5586bc1a3beff3614ce5db5e9-init/diff:/var/lib/docker/overlay2/f8d7a1d5eb0d8502ae92b147370ca2f98a04f499ca8d15b9d64e93f77ddf4f60/diff:/var/lib/docker/overlay2/5d97937e774ff42c6d67fbce8ce268f5d8b517e435a077996a7d7e7807ac0a81/diff",
"MergedDir": "/var/lib/docker/overlay2/65b930f040362793b56db76574bc16a8508e4cb5586bc1a3beff3614ce5db5e9/merged",
"UpperDir": "/var/lib/docker/overlay2/65b930f040362793b56db76574bc16a8508e4cb5586bc1a3beff3614ce5db5e9/diff",
"WorkDir": "/var/lib/docker/overlay2/65b930f040362793b56db76574bc16a8508e4cb5586bc1a3beff3614ce5db5e9/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "Docker-Ubu1804-p11.janzen.com",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "janzen/nginx-centos7:1.20.1-v2.0",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"nginx"
],
"OnBuild": null,
"Labels": {
"author": "janzen<janzen.com>",
"description": "Installed nginx-1.20.1-10.el7 by yum",
"org.label-schema.build-date": "20201113",
"org.label-schema.license": "GPLv2",
"org.label-schema.name": "CentOS Base Image",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "CentOS",
"org.opencontainers.image.created": "2020-11-13 00:00:00+00:00",
"org.opencontainers.image.licenses": "GPL-2.0-only",
"org.opencontainers.image.title": "CentOS Base Image",
"org.opencontainers.image.vendor": "CentOS",
"version": "v2.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "d991189faf1f97b644805eb5644142e377ffc07a9f77ffb5df43897a12255aa1",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/default",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "e33dad33c534de2ab3cddbb789673284f71213e6c692592be1fa9ef48d361212",
"EndpointID": "9162522dabdb1c002e80f0eac272c9c532f68b7c799ff74eacf8aab0fada83b2",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
}
]
[root@Docker-Ubu1804-p11:~]# docker inspect centos
[
{
"Id": "8ba2197c8f659751a5bcd2fe253034f2513530490b49ab8544d0c39a83112b8d",
"Created": "2023-05-02T17:23:50.562114064Z",
"Path": "bash",
"Args": [],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-05-02T17:23:50.817315423Z",
"FinishedAt": "2023-05-02T17:24:59.069363409Z"
},
"Image": "sha256:b9d392225b3e0e7a409f577c7100e38c7f3928aa2f38890e1f839c2aa1147335",
"ResolvConfPath": "/var/lib/docker/containers/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a/hostname",
"HostsPath": "/var/lib/docker/containers/d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a/hosts",
"LogPath": "/var/lib/docker/containers/8ba2197c8f659751a5bcd2fe253034f2513530490b49ab8544d0c39a83112b8d/8ba2197c8f659751a5bcd2fe253034f2513530490b49ab8544d0c39a83112b8d-json.log",
"Name": "/centos",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "container:d38104d63dc27e4e45d12750f44d9e0a31b6dfa6631899d1d76863c864004c4a",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/38ebb17863516ff4dfa147160436d3574642748bbd06ba1461cd2ac5306aefbb-init/diff:/var/lib/docker/overlay2/29086581ca9e32cb946203f99b9831141854a8b22a59ecd79148eb7bbf43ca5d/diff:/var/lib/docker/overlay2/5d97937e774ff42c6d67fbce8ce268f5d8b517e435a077996a7d7e7807ac0a81/diff",
"MergedDir": "/var/lib/docker/overlay2/38ebb17863516ff4dfa147160436d3574642748bbd06ba1461cd2ac5306aefbb/merged",
"UpperDir": "/var/lib/docker/overlay2/38ebb17863516ff4dfa147160436d3574642748bbd06ba1461cd2ac5306aefbb/diff",
"WorkDir": "/var/lib/docker/overlay2/38ebb17863516ff4dfa147160436d3574642748bbd06ba1461cd2ac5306aefbb/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "Docker-Ubu1804-p11.janzen.com",
"Domainname": "",
"User": "",
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Tty": true,
"OpenStdin": true,
"StdinOnce": true,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"bash"
],
"Image": "janzen/centos7:v1.0",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"author": "janzen<janzen.com>",
"description": "BaseImage by centos7,used repo [Base] [EPEL7] from aliyun",
"org.label-schema.build-date": "20201113",
"org.label-schema.license": "GPLv2",
"org.label-schema.name": "CentOS Base Image",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "CentOS",
"org.opencontainers.image.created": "2020-11-13 00:00:00+00:00",
"org.opencontainers.image.licenses": "GPL-2.0-only",
"org.opencontainers.image.title": "CentOS Base Image",
"org.opencontainers.image.vendor": "CentOS",
"version": "v1.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
}
]

Configuring a container-mode container (sharing a none-mode container's network)

#Create container nginx-none using the none network mode
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx-none --network none janzen/nginx-centos7:1.20.1-v2.0
6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx-none bash
[root@6a52c72b7741 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
[root@6a52c72b7741 /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:* users:(("nginx",pid=1,fd=6))
LISTEN 0 128 [::]:80 [::]:* users:(("nginx",pid=1,fd=7))
[root@6a52c72b7741 /]# curl 127.0.0.1
<h1>nginx-1.20.1 base centOS 7 on docker</h1>
[root@6a52c72b7741 /]# exit
exit
[root@Docker-Ubu1804-p11:~]# docker inspect nginx-none
[
{
"Id": "6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d",
"Created": "2023-05-02T17:36:48.594135331Z",
"Path": "nginx",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 17395,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-05-02T17:36:49.030458044Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:4919aacb5ea0aa5d93a5f386f0df115c74cf774ff2df2bd68caf12b66fee3fe7",
"ResolvConfPath": "/var/lib/docker/containers/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d/hostname",
"HostsPath": "/var/lib/docker/containers/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d/hosts",
"LogPath": "/var/lib/docker/containers/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d-json.log",
"Name": "/nginx-none",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "none",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/006632eeee685bae39d1681fdc50c32701a92f74d8334c562a36d02520f9febb-init/diff:/var/lib/docker/overlay2/f8d7a1d5eb0d8502ae92b147370ca2f98a04f499ca8d15b9d64e93f77ddf4f60/diff:/var/lib/docker/overlay2/5d97937e774ff42c6d67fbce8ce268f5d8b517e435a077996a7d7e7807ac0a81/diff",
"MergedDir": "/var/lib/docker/overlay2/006632eeee685bae39d1681fdc50c32701a92f74d8334c562a36d02520f9febb/merged",
"UpperDir": "/var/lib/docker/overlay2/006632eeee685bae39d1681fdc50c32701a92f74d8334c562a36d02520f9febb/diff",
"WorkDir": "/var/lib/docker/overlay2/006632eeee685bae39d1681fdc50c32701a92f74d8334c562a36d02520f9febb/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "6a52c72b7741",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "janzen/nginx-centos7:1.20.1-v2.0",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"nginx"
],
"OnBuild": null,
"Labels": {
"author": "janzen<janzen.com>",
"description": "Installed nginx-1.20.1-10.el7 by yum",
"org.label-schema.build-date": "20201113",
"org.label-schema.license": "GPLv2",
"org.label-schema.name": "CentOS Base Image",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "CentOS",
"org.opencontainers.image.created": "2020-11-13 00:00:00+00:00",
"org.opencontainers.image.licenses": "GPL-2.0-only",
"org.opencontainers.image.title": "CentOS Base Image",
"org.opencontainers.image.vendor": "CentOS",
"version": "v2.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "10f8607e15d919e030e92a60003140aa67c8e530e84f86e8f7077cc3b7f5f885",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/10f8607e15d9",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"none": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "71f67764316856316f5c032b8ca69544ed263116280c421094e407dd7d1714f5",
"EndpointID": "d0a5fef6438d37efe21a94be63e502ab34eb60b6e3c661a4e78588decbd7c7ed",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
}
]
[root@Docker-Ubu1804-p11:~]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:6011 0.0.0.0:* users:(("sshd",pid=17610,fd=10))
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=905,fd=13))
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1105,fd=3))
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:* users:(("sshd",pid=1979,fd=10))
LISTEN 0 128 [::1]:6011 [::]:* users:(("sshd",pid=17610,fd=9))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1105,fd=4))
LISTEN 0 128 [::1]:6010 [::]:* users:(("sshd",pid=1979,fd=9))
#Create a centos container that reuses the network of container nginx-none
[root@Docker-Ubu1804-p11:~]# docker run -it --name centos --network container:nginx-none janzen/centos7:v1.0 bash
[root@6a52c72b7741 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
[root@6a52c72b7741 /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:*
LISTEN 0 128 [::]:80 [::]:*
[root@6a52c72b7741 /]# ip route
[root@6a52c72b7741 /]# curl 127.0.0.1
<h1>nginx-1.20.1 base centOS 7 on docker</h1>
[root@6a52c72b7741 /]# ping 10.0.0.11
connect: Network is unreachable
[root@6a52c72b7741 /]# ping 192.168.17.1
connect: Network is unreachable
[root@6a52c72b7741 /]# exit
exit
[root@Docker-Ubu1804-p11:~]# docker inspect centos
[
{
"Id": "5c1d373f2ba769e7740a0203cfb9407c1159740f13e1ef35ad57a3115572ae34",
"Created": "2023-05-02T17:41:21.17664639Z",
"Path": "bash",
"Args": [],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 2,
"Error": "",
"StartedAt": "2023-05-02T17:41:21.406835425Z",
"FinishedAt": "2023-05-02T17:42:14.560135375Z"
},
"Image": "sha256:b9d392225b3e0e7a409f577c7100e38c7f3928aa2f38890e1f839c2aa1147335",
"ResolvConfPath": "/var/lib/docker/containers/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d/hostname",
"HostsPath": "/var/lib/docker/containers/6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d/hosts",
"LogPath": "/var/lib/docker/containers/5c1d373f2ba769e7740a0203cfb9407c1159740f13e1ef35ad57a3115572ae34/5c1d373f2ba769e7740a0203cfb9407c1159740f13e1ef35ad57a3115572ae34-json.log",
"Name": "/centos",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "container:6a52c72b7741fd0fbdb994c8bfb2e161fd7d00d6a72469ec5e2436104ee2e83d",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/e57543b46cfdae13f6c7705c541f01a73b868ea98bfd50d0e707a767a54a8d70-init/diff:/var/lib/docker/overlay2/29086581ca9e32cb946203f99b9831141854a8b22a59ecd79148eb7bbf43ca5d/diff:/var/lib/docker/overlay2/5d97937e774ff42c6d67fbce8ce268f5d8b517e435a077996a7d7e7807ac0a81/diff",
"MergedDir": "/var/lib/docker/overlay2/e57543b46cfdae13f6c7705c541f01a73b868ea98bfd50d0e707a767a54a8d70/merged",
"UpperDir": "/var/lib/docker/overlay2/e57543b46cfdae13f6c7705c541f01a73b868ea98bfd50d0e707a767a54a8d70/diff",
"WorkDir": "/var/lib/docker/overlay2/e57543b46cfdae13f6c7705c541f01a73b868ea98bfd50d0e707a767a54a8d70/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "6a52c72b7741",
"Domainname": "",
"User": "",
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Tty": true,
"OpenStdin": true,
"StdinOnce": true,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"bash"
],
"Image": "janzen/centos7:v1.0",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"author": "janzen<janzen.com>",
"description": "BaseImage by centos7,used repo [Base] [EPEL7] from aliyun",
"org.label-schema.build-date": "20201113",
"org.label-schema.license": "GPLv2",
"org.label-schema.name": "CentOS Base Image",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "CentOS",
"org.opencontainers.image.created": "2020-11-13 00:00:00+00:00",
"org.opencontainers.image.licenses": "GPL-2.0-only",
"org.opencontainers.image.title": "CentOS Base Image",
"org.opencontainers.image.vendor": "CentOS",
"version": "v1.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
}
]

6. none network mode

A container created with the none mode gets no network configuration at all: no interface is created, no IP is assigned, and there are no routes, so the container cannot communicate with the outside world. To get connectivity you must manually add an interface and configure an IP address and routes, so this mode is very rarely used.

Features of the none network mode

  • Specified with the --network none option
  • No network functionality by default; cannot communicate with the outside world
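As noted above, a none-mode container only gets connectivity if you wire it up by hand. A rough sketch of doing that with a veth pair follows; it assumes root access on the host, a none-mode container named app (the name and image are placeholders), and the docker0 bridge with the 192.168.17.0/24 subnet shown earlier:

```shell
# A none-mode container to wire up (name and image are examples)
docker run -d --name app --network none nginx

# Expose the container's network namespace to the ip-netns tooling
pid=$(docker inspect -f '{{.State.Pid}}' app)
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net /var/run/netns/app

# Create a veth pair: one end joins docker0, the other moves into the container
sudo ip link add veth-host type veth peer name veth-app
sudo ip link set veth-host master docker0 up
sudo ip link set veth-app netns app

# Configure an address and a default route inside the container's namespace
sudo ip netns exec app ip addr add 192.168.17.100/24 dev veth-app
sudo ip netns exec app ip link set veth-app up
sudo ip netns exec app ip route add default via 192.168.17.1
```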

Configuring a none-mode container

[root@Docker-Ubu1804-p11:~]# docker run --rm -it --network none janzen/centos7:v1.0 bash
[root@d0dfd132f35e /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
[root@d0dfd132f35e /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
[root@d0dfd132f35e /]# ip route
[root@d0dfd132f35e /]# ping 192.168.17.1
connect: Network is unreachable
[root@d0dfd132f35e /]# exit
exit
#In none mode port publishing has no effect (-P automatic mapping)
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx-none --network none -P janzen/nginx-centos7:1.20.1-v2.0
2ca8847746fbd56eff8636beca92728d66dc63df9ef44ab264357ba05a716bd4
[root@Docker-Ubu1804-p11:~]# docker port nginx-none
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx-none bash
[root@2ca8847746fb /]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:* users:(("nginx",pid=1,fd=6))
LISTEN 0 128 [::]:80 [::]:* users:(("nginx",pid=1,fd=7))
[root@2ca8847746fb /]# exit
exit
#In none mode port publishing has no effect (-p explicit mapping)
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx1-none --network none -p 80:80 janzen/nginx-centos7:1.20.1-v2.0
3ad25870f6fa6e91da590a30adba8524d3a6128d5a4f5a9439e9b1925ae02d70
[root@Docker-Ubu1804-p11:~]# docker port nginx1-none
[root@Docker-Ubu1804-p11:~]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:6011 0.0.0.0:* users:(("sshd",pid=17610,fd=10))
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=905,fd=13))
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1105,fd=3))
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:* users:(("sshd",pid=1979,fd=10))
LISTEN 0 128 [::1]:6011 [::]:* users:(("sshd",pid=17610,fd=9))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1105,fd=4))
LISTEN 0 128 [::1]:6010 [::]:* users:(("sshd",pid=1979,fd=9))

7、Custom network mode

Besides the built-in network modes, custom networks can be created with a user-defined subnet, gateway and other settings.

Note: containers on the same custom network can reach each other directly by container name, with no need for the --link option.

Custom networks make it possible to give each application cluster its own isolated network, managed independently of the others, while containers within one network can still reach each other by name.

Usage:    docker network COMMAND

Manage networks

Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.

Creating a custom network

Usage:    docker network create [OPTIONS] NETWORK

Create a network

Options:
--attachable Enable manual container attachment
--aux-address map Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
--config-from string The network from which copying the configuration
--config-only Create a configuration only network
-d, --driver string Driver to manage the Network (default "bridge")
--gateway strings IPv4 or IPv6 Gateway for the master subnet
--ingress Create swarm routing-mesh network
--internal Restrict external access to the network
--ip-range strings Allocate container ip from a sub-range
--ipam-driver string IP Address Management Driver (default "default")
--ipam-opt map Set IPAM driver specific options (default map[])
--ipv6 Enable IPv6 networking
--label list Set metadata on a network
-o, --opt map Set driver specific options (default map[])
--scope string Control the network's scope
--subnet strings Subnet in CIDR format that represents a network segment

Inspecting a custom network

Usage:    docker network inspect [OPTIONS] NETWORK [NETWORK...]

Display detailed information on one or more networks

Options:
-f, --format string Format the output using the given Go template
-v, --verbose Verbose output for diagnostics

Running a container on a custom network

Usage:    docker run --network <custom-network-name> IMAGE [COMMAND] [ARG...]

Run a command in a new container

Options:
--network network Connect a container to a network
--network-alias list Add network-scoped alias for the container

Connecting a container to a custom network

Usage:    docker network connect [OPTIONS] NETWORK CONTAINER

Connect a container to a network

Options:
--alias strings Add network-scoped alias for the container
--driver-opt strings driver options for the network
--ip string IPv4 address (e.g., 172.30.100.104)
--ip6 string IPv6 address (e.g., 2001:db8::33)
--link list Add link to another container
--link-local-ip strings Add a link-local address for the container
Usage:    docker network disconnect [OPTIONS] NETWORK CONTAINER

Disconnect a container from a network

Options:
-f, --force Force the container to disconnect from a network

Deleting custom networks

Usage:    docker network rm NETWORK [NETWORK...]

Remove one or more networks

Aliases:
rm, remove
Usage:    docker network prune [OPTIONS]

Remove all unused networks

Options:
--filter filter Provide filter values (e.g. 'until=<timestamp>')
-f, --force Do not prompt for confirmation

Create a custom network for use by a Redis Cluster

#Create the redis-bridge custom network
[root@Docker-Ubu1804-p11:~]# docker network create redis-bridge --subnet 172.19.0.0/16
be4e852e66e275fe723a76a81a272c567e9a0831a5e614583b17de959c9889af
[root@Docker-Ubu1804-p11:~]# docker network inspect redis-bridge
[
{
"Name": "redis-bridge",
"Id": "be4e852e66e275fe723a76a81a272c567e9a0831a5e614583b17de959c9889af",
"Created": "2023-05-03T02:52:26.940859911+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.19.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:3a:65:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b4ff:fe3a:6543/64 scope link
valid_lft forever preferred_lft forever
30: br-be4e852e66e2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:49:4d:d5:42 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-be4e852e66e2
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
br-be4e852e66e2 8000.0242494dd542 no
docker0 8000.0242b43a6543 no
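An aside on the device name seen above: Docker names the bridge of a custom network `br-` followed by the first 12 hex characters of the network ID, which is why the network be4e852e66e2… shows up as br-be4e852e66e2. A one-line sketch of that mapping:

```shell
# The bridge device for a custom network is named "br-" plus the first
# 12 characters of the network ID (matching br-be4e852e66e2 above).
net_id="be4e852e66e275fe723a76a81a272c567e9a0831a5e614583b17de959c9889af"
bridge_name="br-${net_id:0:12}"
echo "$bridge_name"   # prints: br-be4e852e66e2
```

This is handy when matching `docker network ls` output against `brctl show` or `ip a` on a host with many networks.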
#Prepare the redis configuration files
[root@Docker-Ubu1804-p11:~]# for port in {1..6};do
> mkdir -p /data/redis/node-${port}/conf
> cat >> /data/redis/node-${port}/conf/redis.conf << EOF
> bind 0.0.0.0
> protected-mode yes
> port 6379
> tcp-backlog 511
> timeout 0
> tcp-keepalive 300
> daemonize no
> supervised no
> loglevel notice
> databases 16
> always-show-logo no
> save 900 1
> save 300 10
> save 60 10000
> stop-writes-on-bgsave-error yes
> rdbcompression yes
> rdbchecksum yes
> masterauth redis
> requirepass redis
> appendonly yes
> cluster-enabled yes
> cluster-config-file nodes-6379.conf
> cluster-require-full-coverage yes
> EOF
> done
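One detail in the loop above: the heredoc is appended with `>>`, so running the loop a second time duplicates every line of each redis.conf. A minimal idempotent variant is sketched below; it writes a shortened config into a scratch directory (standing in for /data/redis) and truncates with `>` on each run:

```shell
# Idempotent variant of the config loop: '>' truncates on each run instead
# of '>>' appending, so re-runs do not duplicate settings. A temp directory
# stands in for /data/redis, and only a few key settings are shown.
base=$(mktemp -d)
for port in {1..6}; do
  mkdir -p "${base}/node-${port}/conf"
  cat > "${base}/node-${port}/conf/redis.conf" <<'EOF'
bind 0.0.0.0
port 6379
requirepass redis
masterauth redis
appendonly yes
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-require-full-coverage yes
EOF
done
ls "$base"   # node-1 ... node-6
```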
[root@Docker-Ubu1804-p11:~]# tree /data/redis/
/data/redis/
├── node-1
│   └── conf
│   └── redis.conf
├── node-2
│   └── conf
│   └── redis.conf
├── node-3
│   └── conf
│   └── redis.conf
├── node-4
│   └── conf
│   └── redis.conf
├── node-5
│   └── conf
│   └── redis.conf
└── node-6
└── conf
└── redis.conf

12 directories, 6 files
[root@Docker-Ubu1804-p11:~]# cat /data/redis/node-1/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
loglevel notice
databases 16
always-show-logo no
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
masterauth redis
requirepass redis
appendonly yes
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-require-full-coverage yes
#Create the six redis containers
[root@Docker-Ubu1804-p11:~]# for port in {1..6};do
docker run --name redis-${port} -p 637${port}:6379 -p 1637${port}:16379 \
-v /data/redis/node-${port}:/data \
-v /data/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --network redis-bridge --ip 172.19.0.1${port} \
redis:5.0.14-alpine3.14 /usr/local/bin/redis-server /etc/redis/redis.conf
done
24a73aa11d8f4bd1fa4417447bc4fa0521dc8792e2afea78ba5d739fe0a17879
585b4d062be0cbbba85741d0c473f24a3563667eff7d6ac7b0d8b5d93b400b13
0b6e026365b350812d1474a28405c2174e61c87381831957e3f08240eb061ee6
33774a5f6a879915d82566deb8bb8dc54a5a4955dbb9aa2c3c9e3ea01e1248eb
73a5815e188d25d7b7b7bbf232fe8275bf39b91aa63fdcced42d1c5b1f02c382
1ecb32d70066fee5bf3d4a789af9615dc7fedd4be117423e3d0306db310465cf
[root@Docker-Ubu1804-p11:~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ecb32d70066 redis:5.0.14-alpine3.14 "docker-entrypoint.s…" 4 seconds ago Up 3 seconds 0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp redis-6
73a5815e188d redis:5.0.14-alpine3.14 "docker-entrypoint.s…" 5 seconds ago Up 3 seconds 0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp redis-5
33774a5f6a87 redis:5.0.14-alpine3.14 "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp redis-4
0b6e026365b3 redis:5.0.14-alpine3.14 "docker-entrypoint.s…" 6 seconds ago Up 5 seconds 0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp redis-3
585b4d062be0 redis:5.0.14-alpine3.14 "docker-entrypoint.s…" 7 seconds ago Up 5 seconds 0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp redis-2
24a73aa11d8f redis:5.0.14-alpine3.14 "docker-entrypoint.s…" 7 seconds ago Up 6 seconds 0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp redis-1
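The loop above publishes host ports 637N and 1637N for node N. When building a loop like this it can help to echo the commands first and only pipe them to a shell once they look right; a dry-run sketch (volume and config-file arguments abbreviated):

```shell
# Dry-run sketch: print each docker run command instead of executing it, so
# the 637N->6379 / 1637N->16379 mapping can be audited before anything runs.
gen_redis_cmds() {
  for port in {1..6}; do
    echo "docker run --name redis-${port}" \
         "-p 637${port}:6379 -p 1637${port}:16379" \
         "-v /data/redis/node-${port}:/data" \
         "-d --network redis-bridge --ip 172.19.0.1${port}" \
         "redis:5.0.14-alpine3.14"
  done
}
gen_redis_cmds            # review the six commands...
# gen_redis_cmds | sh     # ...then execute them
```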

#Create the cluster
[root@Docker-Ubu1804-p11:~]# docker exec -it redis-1 sh
/data # redis-cli -a redis --cluster create 172.19.0.11:6379 172.19.0.12:6379 172.19.0.13:6379 172.19.0.14:6379 172.19.0.15:6379 172.19.0.16:6379 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.19.0.15:6379 to 172.19.0.11:6379
Adding replica 172.19.0.16:6379 to 172.19.0.12:6379
Adding replica 172.19.0.14:6379 to 172.19.0.13:6379
M: f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379
slots:[0-5460] (5461 slots) master
M: 9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379
slots:[5461-10922] (5462 slots) master
M: e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379
replicates e83e6914d17a2e36fc8fdf5c840c4476215e4100
S: 73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379
replicates f578fe23248df984c233f1673e29b0cacf56beab
S: 480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379
replicates 9b3142d0e075c46cd7793deabf94438adfc38be4
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.19.0.11:6379)
M: f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379
slots: (0 slots) slave
replicates e83e6914d17a2e36fc8fdf5c840c4476215e4100
M: e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379
slots: (0 slots) slave
replicates f578fe23248df984c233f1673e29b0cacf56beab
M: 9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379
slots: (0 slots) slave
replicates 9b3142d0e075c46cd7793deabf94438adfc38be4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
/data #
#Verify the result

[root@Docker-Ubu1804-p11:~]# docker exec -it redis-1 sh

/data # redis-cli -a redis -c
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6379> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:256
cluster_stats_messages_pong_sent:251
cluster_stats_messages_sent:507
cluster_stats_messages_ping_received:246
cluster_stats_messages_pong_received:256
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:507
127.0.0.1:6379> cluster nodes
82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379@16379 slave e83e6914d17a2e36fc8fdf5c840c4476215e4100 0 1683064053000 4 connected
f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379@16379 myself,master - 0 1683064053000 1 connected 0-5460
e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379@16379 master - 0 1683064054461 3 connected 10923-16383
73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379@16379 slave f578fe23248df984c233f1673e29b0cacf56beab 0 1683064052000 5 connected
9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379@16379 master - 0 1683064053000 2 connected 5461-10922
480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379@16379 slave 9b3142d0e075c46cd7793deabf94438adfc38be4 0 1683064053451 6 connected
127.0.0.1:6379> set key1 value1
-> Redirected to slot [9189] located at 172.19.0.12:6379
OK
172.19.0.12:6379> set key2 value2
-> Redirected to slot [4998] located at 172.19.0.11:6379
OK
172.19.0.11:6379> get key2
"value2"
172.19.0.11:6379>
#Simulate a failover
[root@Docker-Ubu1804-p11:~]# docker stop redis-2
redis-2
[root@Docker-Ubu1804-p11:~]# docker exec -it redis-1 sh
/data # redis-cli -a redis -c
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6379> cluster nodes
82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379@16379 slave e83e6914d17a2e36fc8fdf5c840c4476215e4100 0 1683064282186 4 connected
f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379@16379 myself,master - 0 1683064280000 1 connected 0-5460
e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379@16379 master - 0 1683064280171 3 connected 10923-16383
73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379@16379 slave f578fe23248df984c233f1673e29b0cacf56beab 0 1683064281180 5 connected
9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379@16379 master - 1683064277954 1683064275000 2 connected 5461-10922
480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379@16379 slave 9b3142d0e075c46cd7793deabf94438adfc38be4 0 1683064279164 6 connected
127.0.0.1:6379> cluster nodes
82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379@16379 slave e83e6914d17a2e36fc8fdf5c840c4476215e4100 0 1683064286000 4 connected
f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379@16379 myself,master - 0 1683064285000 1 connected 0-5460
e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379@16379 master - 0 1683064288234 3 connected 10923-16383
73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379@16379 slave f578fe23248df984c233f1673e29b0cacf56beab 0 1683064287227 5 connected
9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379@16379 master - 1683064277954 1683064275000 2 connected 5461-10922
480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379@16379 slave 9b3142d0e075c46cd7793deabf94438adfc38be4 0 1683064287000 6 connected
127.0.0.1:6379> cluster nodes
82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379@16379 slave e83e6914d17a2e36fc8fdf5c840c4476215e4100 0 1683064292262 4 connected
f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379@16379 myself,master - 0 1683064292000 1 connected 0-5460
e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379@16379 master - 0 1683064291000 3 connected 10923-16383
73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379@16379 slave f578fe23248df984c233f1673e29b0cacf56beab 0 1683064291254 5 connected
9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379@16379 master - 1683064277954 1683064275000 2 connected 5461-10922
480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379@16379 slave 9b3142d0e075c46cd7793deabf94438adfc38be4 0 1683064290246 6 connected
127.0.0.1:6379> cluster nodes
82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379@16379 slave e83e6914d17a2e36fc8fdf5c840c4476215e4100 0 1683064297000 4 connected
f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379@16379 myself,master - 0 1683064297000 1 connected 0-5460
e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379@16379 master - 0 1683064300329 3 connected 10923-16383
73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379@16379 slave f578fe23248df984c233f1673e29b0cacf56beab 0 1683064299320 5 connected
9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379@16379 master,fail - 1683064277954 1683064275000 2 connected 5461-10922
480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379@16379 slave 9b3142d0e075c46cd7793deabf94438adfc38be4 0 1683064298313 6 connected
127.0.0.1:6379> cluster nodes
480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379@16379 myself,master - 0 1683064434000 8 connected 5461-10922
f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379@16379 master - 0 1683064434000 1 connected 0-5460
9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379@16379 master,fail - 1683064277874 1683064275000 2 connected
e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379@16379 master - 0 1683064434275 3 connected 10923-16383
82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379@16379 slave e83e6914d17a2e36fc8fdf5c840c4476215e4100 0 1683064433267 4 connected
73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379@16379 slave f578fe23248df984c233f1673e29b0cacf56beab 0 1683064435283 5 connected
127.0.0.1:6379>
127.0.0.1:6379> set key12 value12
-> Redirected to slot [13976] located at 172.19.0.13:6379
OK
172.19.0.13:6379> set key13 value13
-> Redirected to slot [9913] located at 172.19.0.16:6379
OK
#Simulate recovery from the failure
[root@Docker-Ubu1804-p11:~]# docker start redis-2
redis-2
[root@Docker-Ubu1804-p11:~]# docker exec -it redis-1 sh
/data # redis-cli -a redis -c
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6379> cluster nodes
82c804f83cf76cf4d55452fb60ce8c9f90cce032 172.19.0.14:6379@16379 slave e83e6914d17a2e36fc8fdf5c840c4476215e4100 0 1683064801225 4 connected
f578fe23248df984c233f1673e29b0cacf56beab 172.19.0.11:6379@16379 myself,master - 0 1683064802000 1 connected 0-5460
e83e6914d17a2e36fc8fdf5c840c4476215e4100 172.19.0.13:6379@16379 master - 0 1683064804246 3 connected 10923-16383
73564b278ec648e2b7f6fbb8f43e0eacc5dad117 172.19.0.15:6379@16379 slave f578fe23248df984c233f1673e29b0cacf56beab 0 1683064803239 5 connected
9b3142d0e075c46cd7793deabf94438adfc38be4 172.19.0.12:6379@16379 slave 480c27be1f17180ebaba1f6e09d69165555babc5 0 1683064802000 8 connected
480c27be1f17180ebaba1f6e09d69165555babc5 172.19.0.16:6379@16379 master - 0 1683064802233 8 connected 5461-10922
127.0.0.1:6379>

8、Communication between containers on different networks of the same host

Create a custom network test-net using subnet 192.168.17.0/24 with gateway 192.168.17.1, then start two containers, one attached to the default network and one to the custom network. Because no forwarding policy or route connects the two virtual bridges, containers on different networks of the same host cannot communicate with each other.

[root@Docker-Ubu1804-p11:~]# docker network create test-net --subnet 192.168.17.1/24
0750c22bc20e92c96a7db7c99c1a3d4d464d7d3c7b9ccd76dc0735389646dbba
[root@Docker-Ubu1804-p11:~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
aed44038b9c9 bridge bridge local
e33dad33c534 host host local
71f677643168 none null local
0750c22bc20e test-net bridge local
[root@Docker-Ubu1804-p11:~]# docker inspect test-net
[
{
"Name": "test-net",
"Id": "0750c22bc20e92c96a7db7c99c1a3d4d464d7d3c7b9ccd76dc0735389646dbba",
"Created": "2023-05-03T17:21:51.837431752+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.17.1/24"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "aed44038b9c93034aa1369ed67ee185e83d36f1b0b30d5fdfae8bb0cad7bc2f6",
"Created": "2023-05-03T17:20:27.242865491+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"IPRange": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
#Start a container on the default bridge
[root@Docker-Ubu1804-p11:~]# docker run --rm -it -h centos-docker0 janzen/centos7:v1.0 bash
[root@centos-docker0 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@centos-docker0 /]# ping 192.168.17.1
PING 192.168.17.1 (192.168.17.1) 56(84) bytes of data.
64 bytes from 192.168.17.1: icmp_seq=1 ttl=64 time=0.249 ms
64 bytes from 192.168.17.1: icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from 192.168.17.1: icmp_seq=3 ttl=64 time=0.043 ms
64 bytes from 192.168.17.1: icmp_seq=4 ttl=64 time=0.041 ms
^C
--- 192.168.17.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3038ms
rtt min/avg/max/mdev = 0.041/0.093/0.249/0.090 ms
[root@centos-docker0 /]# ping 192.168.17.2
PING 192.168.17.2 (192.168.17.2) 56(84) bytes of data.
^C
--- 192.168.17.2 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6124ms
#Start a container on the custom network test-net
[root@Docker-Ubu1804-p11:~]# docker run --rm -it -h centos-test --network test-net janzen/centos7:v1.0 bash
[root@centos-test /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.2/24 brd 192.168.17.255 scope global eth0
valid_lft forever preferred_lft forever
[root@centos-test /]# ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.203 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from 172.17.0.1: icmp_seq=3 ttl=64 time=0.064 ms
64 bytes from 172.17.0.1: icmp_seq=4 ttl=64 time=0.045 ms
^C
--- 172.17.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3052ms
rtt min/avg/max/mdev = 0.044/0.089/0.203/0.066 ms
[root@centos-test /]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
^C
--- 172.17.0.2 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3079ms
#Check the network state on the host
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:6b:1a:f7:78 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:6bff:fe1a:f778/64 scope link
valid_lft forever preferred_lft forever
5: br-0750c22bc20e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0e:25:c6:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global br-0750c22bc20e
valid_lft forever preferred_lft forever
inet6 fe80::42:eff:fe25:c614/64 scope link
valid_lft forever preferred_lft forever
11: veth3fe9ced@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether da:03:01:df:93:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::d803:1ff:fedf:936b/64 scope link
valid_lft forever preferred_lft forever
13: veth4a677fa@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0750c22bc20e state UP group default
link/ether 62:64:89:bc:71:7c brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::6064:89ff:febc:717c/64 scope link
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
br-0750c22bc20e 8000.02420e25c614 no veth4a677fa
docker0 8000.02426b1af778 no veth3fe9ced

8.1、Enabling communication between containers on different networks of one host via iptables rules

#Inspect the current network rules on the host
[root@Docker-Ubu1804-p11:~]# cat /proc/sys/net/ipv4/ip_forward
1
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
br-0750c22bc20e 8000.02420e25c614 no veth4a677fa
docker0 8000.02426b1af778 no veth3fe9ced
[root@Docker-Ubu1804-p11:~]# iptables -vnL
Chain INPUT (policy ACCEPT 1223 packets, 89127 bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
11 924 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
11 924 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0

Chain OUTPUT (policy ACCEPT 957 packets, 98000 bytes)
pkts bytes target prot opt in out source destination

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
4 336 DOCKER-ISOLATION-STAGE-2 all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
7 588 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
pkts bytes target prot opt in out source destination
7 588 DROP all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
4 336 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
11 924 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
[root@Docker-Ubu1804-p11:~]# iptables-save
# Generated by iptables-save v1.6.1 on Wed May 3 17:37:18 2023
*filter
:INPUT ACCEPT [1284:92891]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [1016:106068]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-0750c22bc20e -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-0750c22bc20e -j DOCKER
-A FORWARD -i br-0750c22bc20e ! -o br-0750c22bc20e -j ACCEPT
-A FORWARD -i br-0750c22bc20e -o br-0750c22bc20e -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-0750c22bc20e ! -o br-0750c22bc20e -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-0750c22bc20e -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed May 3 17:37:18 2023
# Generated by iptables-save v1.6.1 on Wed May 3 17:37:18 2023
*nat
:PREROUTING ACCEPT [15:1399]
:INPUT ACCEPT [4:475]
:OUTPUT ACCEPT [15:1140]
:POSTROUTING ACCEPT [15:1140]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.17.0/24 ! -o br-0750c22bc20e -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i br-0750c22bc20e -j RETURN
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed May 3 17:37:18 2023

#Adjust the iptables policy
##Method 1: export the rules, edit the copy, then reload it
[root@Docker-Ubu1804-p11:~]# iptables-save > iptables.rule
[root@Docker-Ubu1804-p11:~]# sed -i.bak -e "/-A DOCKER-ISOLATION-STAGE-2 -o br-0750c22bc20e -j DROP/c -A DOCKER-ISOLATION-STAGE-2 -o br-0750c22bc20e -j ACCEPT" -e "/-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP/c -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j ACCEPT" iptables.rule
[root@Docker-Ubu1804-p11:~]# cat iptables.rule
# Generated by iptables-save v1.6.1 on Wed May 3 17:38:47 2023
*filter
:INPUT ACCEPT [1336:96193]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [1055:111456]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-0750c22bc20e -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-0750c22bc20e -j DOCKER
-A FORWARD -i br-0750c22bc20e ! -o br-0750c22bc20e -j ACCEPT
-A FORWARD -i br-0750c22bc20e -o br-0750c22bc20e -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-0750c22bc20e ! -o br-0750c22bc20e -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-0750c22bc20e -j ACCEPT
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed May 3 17:38:47 2023
# Generated by iptables-save v1.6.1 on Wed May 3 17:38:47 2023
*nat
:PREROUTING ACCEPT [16:1477]
:INPUT ACCEPT [5:553]
:OUTPUT ACCEPT [15:1140]
:POSTROUTING ACCEPT [15:1140]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.17.0/24 ! -o br-0750c22bc20e -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i br-0750c22bc20e -j RETURN
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed May 3 17:38:47 2023
[root@Docker-Ubu1804-p11:~]# iptables-restore < iptables.rule
[root@Docker-Ubu1804-p11:~]# iptables -vnL
Chain INPUT (policy ACCEPT 24 packets, 1584 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 14 packets, 1352 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
## Method 2: modify the iptables rules directly from the command line
[root@Docker-Ubu1804-p11:~]# iptables -I DOCKER-ISOLATION-STAGE-2 -j ACCEPT
[root@Docker-Ubu1804-p11:~]# iptables -vnL
Chain INPUT (policy ACCEPT 6 packets, 428 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 3 packets, 420 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
[root@Docker-Ubu1804-p11:~]# iptables -vnL
Chain INPUT (policy ACCEPT 29 packets, 1864 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
16 1344 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
16 1344 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-0750c22bc20e br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 20 packets, 5784 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
8 672 DOCKER-ISOLATION-STAGE-2 all -- br-0750c22bc20e !br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
8 672 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
pkts bytes target prot opt in out source destination
16 1344 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-0750c22bc20e 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
16 1344 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
# Container centos-docker0
[root@centos-docker0 /]# ping 192.168.17.2
PING 192.168.17.2 (192.168.17.2) 56(84) bytes of data.
64 bytes from 192.168.17.2: icmp_seq=1 ttl=63 time=0.097 ms
64 bytes from 192.168.17.2: icmp_seq=2 ttl=63 time=0.054 ms
64 bytes from 192.168.17.2: icmp_seq=3 ttl=63 time=0.054 ms
64 bytes from 192.168.17.2: icmp_seq=4 ttl=63 time=0.057 ms
^C
--- 192.168.17.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3065ms
rtt min/avg/max/mdev = 0.054/0.065/0.097/0.019 ms
# Container centos-test
[root@centos-test /]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=63 time=0.051 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=63 time=0.068 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=63 time=0.140 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=63 time=0.053 ms
^C
--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3075ms
rtt min/avg/max/mdev = 0.051/0.078/0.140/0.036 ms

8.2 Using docker network connect for communication between containers on different networks on the same host

With docker network connect, a container can be attached to an additional network, enabling communication between containers on different networks on the same host.

# Attach a container to the specified network; it can then communicate with containers on that network
Usage: docker network connect [OPTIONS] NETWORK CONTAINER
Connect a container to a network
Options:
--alias strings Add network-scoped alias for the container
--driver-opt strings driver options for the network
--ip string IPv4 address (e.g., 172.30.100.104)
--ip6 string IPv6 address (e.g., 2001:db8::33)
--link list Add link to another container
--link-local-ip strings Add a link-local address for the container
# Detach a container from a network it is connected to; it can then no longer communicate with containers on that network
Usage: docker network disconnect [OPTIONS] NETWORK CONTAINER
Disconnect a container from a network
Options:
-f, --force Force the container to disconnect from a network
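The attach/detach round trip exercised in the rest of this section can be sketched as a short script. The network name test-net and container name brave_nash come from the session below; the compose_cmd helper is purely illustrative and only prints the command lines, since actually running them needs a live Docker daemon.

```shell
#!/bin/sh
# Illustrative only: print the docker commands for the attach/detach
# round trip instead of executing them (executing needs a live daemon).
compose_cmd() {   # $1 = connect|disconnect, $2 = network, $3 = container
    echo "docker network $1 $2 $3"
}

compose_cmd connect test-net brave_nash      # attach to the second network
# docker exec brave_nash ping -c2 192.168.17.2   # now reachable
compose_cmd disconnect test-net brave_nash   # detach again
```

With a daemon available, replacing each compose_cmd call with the printed command reproduces the steps demonstrated below.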
# View the current docker network information
[root@Docker-Ubu1804-p11:~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "aed44038b9c93034aa1369ed67ee185e83d36f1b0b30d5fdfae8bb0cad7bc2f6",
"Created": "2023-05-03T17:20:27.242865491+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"IPRange": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"20461eb1dd605eae1556b79293315715443171e4d9d09b7fe161484943be756d": {
"Name": "brave_nash",
"EndpointID": "9c1b08da3cb9f0b0afcd79bf725a8682295f612b8c475b9d7b0071ab46138323",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]# docker inspect test-net
[
{
"Name": "test-net",
"Id": "0750c22bc20e92c96a7db7c99c1a3d4d464d7d3c7b9ccd76dc0735389646dbba",
"Created": "2023-05-03T17:21:51.837431752+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.17.1/24"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"e806c605a0359251b2f1a77e7dcf9207d0669947f91825826a3ef4e196a7ee68": {
"Name": "interesting_clarke",
"EndpointID": "c23bf545016fe4f9b5602516da617ba210a7faf345ac9f6f56dee0c3ecd48b90",
"MacAddress": "02:42:c0:a8:11:02",
"IPv4Address": "192.168.17.2/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
# Attach container centos-docker0 to the test-net network
[root@Docker-Ubu1804-p11:~]# docker network connect test-net 20461eb1dd60
[root@Docker-Ubu1804-p11:~]# docker inspect test-net
[
{
"Name": "test-net",
"Id": "0750c22bc20e92c96a7db7c99c1a3d4d464d7d3c7b9ccd76dc0735389646dbba",
"Created": "2023-05-03T17:21:51.837431752+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.17.1/24"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"20461eb1dd605eae1556b79293315715443171e4d9d09b7fe161484943be756d": {
"Name": "brave_nash",
"EndpointID": "dff4ba125f1ff51830bc575bcac51850170109223678144b077b319ebeca9c43",
"MacAddress": "02:42:c0:a8:11:03",
"IPv4Address": "192.168.17.3/24",
"IPv6Address": ""
},
"e806c605a0359251b2f1a77e7dcf9207d0669947f91825826a3ef4e196a7ee68": {
"Name": "interesting_clarke",
"EndpointID": "c23bf545016fe4f9b5602516da617ba210a7faf345ac9f6f56dee0c3ecd48b90",
"MacAddress": "02:42:c0:a8:11:02",
"IPv4Address": "192.168.17.2/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]#
# Container centos-docker0 now has a new interface with address 192.168.17.3
[root@centos-docker0 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
14: eth1@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.3/24 brd 192.168.17.255 scope global eth1
valid_lft forever preferred_lft forever
[root@centos-docker0 /]# ping 192.168.17.2
PING 192.168.17.2 (192.168.17.2) 56(84) bytes of data.
64 bytes from 192.168.17.2: icmp_seq=1 ttl=64 time=0.331 ms
64 bytes from 192.168.17.2: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 192.168.17.2: icmp_seq=3 ttl=64 time=0.051 ms
64 bytes from 192.168.17.2: icmp_seq=4 ttl=64 time=0.102 ms
^C
--- 192.168.17.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3066ms
rtt min/avg/max/mdev = 0.048/0.133/0.331/0.116 ms
# Container centos-test is unchanged and still cannot reach 172.17.0.2
[root@centos-test /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.2/24 brd 192.168.17.255 scope global eth0
valid_lft forever preferred_lft forever
[root@centos-test /]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
^C
--- 172.17.0.2 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4082ms
# Attach container centos-test to the bridge network
[root@Docker-Ubu1804-p11:~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e806c605a035 janzen/centos7:v1.0 "bash" 51 minutes ago Up 51 minutes interesting_clarke
20461eb1dd60 janzen/centos7:v1.0 "bash" 52 minutes ago Up 52 minutes brave_nash
[root@Docker-Ubu1804-p11:~]# docker network connect bridge e806c605a035
[root@Docker-Ubu1804-p11:~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "aed44038b9c93034aa1369ed67ee185e83d36f1b0b30d5fdfae8bb0cad7bc2f6",
"Created": "2023-05-03T17:20:27.242865491+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"IPRange": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"20461eb1dd605eae1556b79293315715443171e4d9d09b7fe161484943be756d": {
"Name": "brave_nash",
"EndpointID": "9c1b08da3cb9f0b0afcd79bf725a8682295f612b8c475b9d7b0071ab46138323",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"e806c605a0359251b2f1a77e7dcf9207d0669947f91825826a3ef4e196a7ee68": {
"Name": "interesting_clarke",
"EndpointID": "0ea73b1e57c6ce86b4b1a095510a4822e0fd5e2726fee669c839c7d8a2682e53",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]#
# Container centos-test now has a new interface with address 172.17.0.3
[root@centos-test /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.2/24 brd 192.168.17.255 scope global eth0
valid_lft forever preferred_lft forever
16: eth1@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth1
valid_lft forever preferred_lft forever
[root@centos-test /]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.054 ms
^C
--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3067ms
rtt min/avg/max/mdev = 0.049/0.069/0.120/0.029 ms
# Host network at this point
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:6b:1a:f7:78 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:6bff:fe1a:f778/64 scope link
valid_lft forever preferred_lft forever
5: br-0750c22bc20e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0e:25:c6:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global br-0750c22bc20e
valid_lft forever preferred_lft forever
inet6 fe80::42:eff:fe25:c614/64 scope link
valid_lft forever preferred_lft forever
11: veth3fe9ced@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether da:03:01:df:93:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::d803:1ff:fedf:936b/64 scope link
valid_lft forever preferred_lft forever
13: veth4a677fa@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0750c22bc20e state UP group default
link/ether 62:64:89:bc:71:7c brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::6064:89ff:febc:717c/64 scope link
valid_lft forever preferred_lft forever
15: vethe93068a@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0750c22bc20e state UP group default
link/ether 92:23:14:f4:3d:fe brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::9023:14ff:fef4:3dfe/64 scope link
valid_lft forever preferred_lft forever
17: vethd761ab8@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 82:a5:c9:f7:c4:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::80a5:c9ff:fef7:c4c5/64 scope link
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
br-0750c22bc20e 8000.02420e25c614 no veth4a677fa
vethe93068a
docker0 8000.02426b1af778 no veth3fe9ced
vethd761ab8
# Disconnect the containers from the extra networks
[root@Docker-Ubu1804-p11:~]# docker network disconnect test-net 20461eb1dd60
[root@Docker-Ubu1804-p11:~]# docker network disconnect bridge e806c605a035
[root@Docker-Ubu1804-p11:~]# docker inspect test-net
[
{
"Name": "test-net",
"Id": "0750c22bc20e92c96a7db7c99c1a3d4d464d7d3c7b9ccd76dc0735389646dbba",
"Created": "2023-05-03T17:21:51.837431752+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.17.1/24"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"e806c605a0359251b2f1a77e7dcf9207d0669947f91825826a3ef4e196a7ee68": {
"Name": "interesting_clarke",
"EndpointID": "c23bf545016fe4f9b5602516da617ba210a7faf345ac9f6f56dee0c3ecd48b90",
"MacAddress": "02:42:c0:a8:11:02",
"IPv4Address": "192.168.17.2/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "aed44038b9c93034aa1369ed67ee185e83d36f1b0b30d5fdfae8bb0cad7bc2f6",
"Created": "2023-05-03T17:20:27.242865491+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"IPRange": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"20461eb1dd605eae1556b79293315715443171e4d9d09b7fe161484943be756d": {
"Name": "brave_nash",
"EndpointID": "9c1b08da3cb9f0b0afcd79bf725a8682295f612b8c475b9d7b0071ab46138323",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
[root@Docker-Ubu1804-p11:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:ff:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:ff18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:6b:1a:f7:78 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:6bff:fe1a:f778/64 scope link
valid_lft forever preferred_lft forever
5: br-0750c22bc20e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0e:25:c6:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.17.1/24 brd 192.168.17.255 scope global br-0750c22bc20e
valid_lft forever preferred_lft forever
inet6 fe80::42:eff:fe25:c614/64 scope link
valid_lft forever preferred_lft forever
11: veth3fe9ced@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether da:03:01:df:93:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::d803:1ff:fedf:936b/64 scope link
valid_lft forever preferred_lft forever
13: veth4a677fa@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0750c22bc20e state UP group default
link/ether 62:64:89:bc:71:7c brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::6064:89ff:febc:717c/64 scope link
valid_lft forever preferred_lft forever
[root@Docker-Ubu1804-p11:~]# brctl show
bridge name bridge id STP enabled interfaces
br-0750c22bc20e 8000.02420e25c614 no veth4a677fa
docker0 8000.02426b1af778 no veth3fe9ced
# Container centos-docker0
[root@centos-docker0 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@centos-docker0 /]# ping 192.168.17.2
PING 192.168.17.2 (192.168.17.2) 56(84) bytes of data.
^C
--- 192.168.17.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1013ms
# Container centos-test
[root@centos-test /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:11:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.17.2/24 brd 192.168.17.255 scope global eth0
valid_lft forever preferred_lft forever
[root@centos-test /]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1019ms

4. Container interconnection across hosts

1. Cross-host container communication via bridged mode

In bridged mode, the host's physical NIC is added to the docker0 bridge, placing containers on the same layer-2 network as the hosts so that containers on different hosts can reach each other directly.

# Install the bridge-utils tools
apt install -y bridge-utils

# Add the host NIC ens33 to the docker0 bridge (this makes the host's original IP address unusable)
brctl addif docker0 ens33
# Remove the host NIC ens33 from the docker0 bridge
brctl delif docker0 ens33
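One caveat worth spelling out (the workaround below is an assumption on my part, not shown in this document): once ens33 joins docker0, the address configured on ens33 stops working, so the usual fix is to migrate that address onto the bridge itself. A sketch that prints the sequence rather than running it, since the real commands need root and should be run from a local console, not over SSH:

```shell
#!/bin/sh
# Sketch: print the commands that migrate the host address onto docker0
# after bridging the NIC. Values match this document's Ubuntu host.
bridge_migrate_cmds() {   # $1 = NIC, $2 = host CIDR, $3 = default gateway
    echo "brctl addif docker0 $1"            # enslave the NIC to the bridge
    echo "ip addr del $2 dev $1"             # drop the address from the NIC
    echo "ip addr add $2 dev docker0"        # re-add it on the bridge
    echo "ip route add default via $3"       # restore the default route
}

bridge_migrate_cmds ens33 10.0.0.11/24 10.0.0.254
```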

2. Cross-host container communication via NAT

The idea is to configure, on each host, a route to the peer host's container subnet plus matching iptables rules. This suits small environments; for large environments use Kubernetes instead.
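The per-host steps walked through in 2.1-2.3 can be condensed into one sketch. The host addresses and container subnets are the ones used below; the peer_setup_cmds helper only prints the commands, since applying them needs root on each host.

```shell
#!/bin/sh
# Condensed sketch of the NAT approach: on each host, route the peer's
# container subnet via the peer host, then let forwarded LAN traffic
# past the FORWARD DROP policy. Prints the commands instead of running them.
peer_setup_cmds() {   # $1 = peer container subnet, $2 = peer host IP
    echo "route add -net $1 gw $2"
    echo "iptables -A FORWARD -s 10.0.0.0/8 -j ACCEPT"
}

peer_setup_cmds 172.17.2.0/24 10.0.0.7    # run on the Ubuntu host (10.0.0.11)
peer_setup_cmds 172.17.1.0/24 10.0.0.11   # run on the CentOS host (10.0.0.7)
```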

2.1 Changing the default Docker subnet on each host

## Change the default Docker subnet configuration on the Ubuntu host
[root@Docker-Ubu1804-p11:~]# vim /etc/docker/daemon.json
[root@Docker-Ubu1804-p11:~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://hub-mirror.c.163.com","https://po3g231a.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn"],
"bip": "172.17.1.1/24",
"fixed-cidr": "172.17.1.0/24"
}
[root@Docker-Ubu1804-p11:~]# systemctl restart docker
[root@Docker-Ubu1804-p11:~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "8c663d87bd5010eefaad6d49c5b21d4d4da4032a3e68a6ee67a319323a979739",
"Created": "2023-05-03T21:22:37.077141317+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.1.0/24",
"IPRange": "172.17.1.0/24",
"Gateway": "172.17.1.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
## Change the default Docker network configuration on the CentOS host
[root@Template-CentOS7-7 ~]# vim /etc/docker/daemon.json
[root@Template-CentOS7-7 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://hub-mirror.c.163.com","https://po3g231a.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn"],
"bip": "172.17.2.1/24",
"fixed-cidr": "172.17.2.0/24"
}
[root@Template-CentOS7-7 ~]# systemctl restart docker
[root@Template-CentOS7-7 ~]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "4580d8525f9457ccc157fd1d12bd9fc234712eedc5c4b4fea51867131e69ac6f",
"Created": "2023-05-03T21:30:17.34039947+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.2.0/24",
"IPRange": "172.17.2.0/24",
"Gateway": "172.17.2.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

2.2 Running containers on the hosts

## Run a container on each of the two hosts
[root@Docker-Ubu1804-p11:~]# docker run -d --name nginx-1 janzen/nginx-centos7:1.20.1-v2.0
5bef36b85ba92a06992060169632641a65bee347eabe98ce138b6377fd4b62dd
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx-1 bash
[root@5bef36b85ba9 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.1.2/24 brd 172.17.1.255 scope global eth0
valid_lft forever preferred_lft forever
[root@5bef36b85ba9 /]# ping 172.17.2.2
PING 172.17.2.2 (172.17.2.2) 56(84) bytes of data.
^C
--- 172.17.2.2 ping statistics ---
12 packets transmitted, 0 received, 100% packet loss, time 11269ms
[root@Template-CentOS7-7 ~]# docker run -d --name nginx-2 janzen/nginx-centos7:1.20.1-v2.0
ed9405026cba8e8723b61a58e04a2bbeb3499ca5dfbd492d75cd3a7c4562cca0
[root@Template-CentOS7-7 ~]# docker exec -it nginx-2 bash
[root@ed9405026cba /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.2.2/24 brd 172.17.2.255 scope global eth0
valid_lft forever preferred_lft forever
[root@ed9405026cba /]# ping 172.17.1.2
PING 172.17.1.2 (172.17.1.2) 56(84) bytes of data.
^C
--- 172.17.1.2 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6002ms

2.3 Configuring routes and iptables policies on the hosts

## Configure the route and iptables rules on the Ubuntu host
[root@Docker-Ubu1804-p11:~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.254 0.0.0.0 UG 0 0 0 ens33
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 ens33
172.17.1.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
[root@Docker-Ubu1804-p11:~]# route add -net 172.17.2.0/24 gw 10.0.0.7
[root@Docker-Ubu1804-p11:~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.254 0.0.0.0 UG 0 0 0 ens33
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 ens33
172.17.1.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
172.17.2.0 10.0.0.7 255.255.255.0 UG 0 0 0 ens33
[root@Docker-Ubu1804-p11:~]# iptables-save
# Generated by iptables-save v1.6.1 on Wed May 3 21:41:08 2023
*filter
:INPUT ACCEPT [499:33958]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [335:36456]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed May 3 21:41:08 2023
# Generated by iptables-save v1.6.1 on Wed May 3 21:41:08 2023
*nat
:PREROUTING ACCEPT [5:698]
:INPUT ACCEPT [4:614]
:OUTPUT ACCEPT [3:228]
:POSTROUTING ACCEPT [3:228]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.1.0/24 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed May 3 21:41:08 2023
[root@Docker-Ubu1804-p11:~]# iptables -A FORWARD -s 10.0.0.0/8 -j ACCEPT
[root@Docker-Ubu1804-p11:~]# iptables-save
# Generated by iptables-save v1.6.1 on Wed May 3 21:42:18 2023
*filter
:INPUT ACCEPT [6:428]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [4:512]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -s 10.0.0.0/8 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed May 3 21:42:18 2023
# Generated by iptables-save v1.6.1 on Wed May 3 21:42:18 2023
*nat
:PREROUTING ACCEPT [5:698]
:INPUT ACCEPT [4:614]
:OUTPUT ACCEPT [3:228]
:POSTROUTING ACCEPT [3:228]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.1.0/24 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed May 3 21:42:18 2023
## Configure the route and iptables policy on the CentOS host
[root@Template-CentOS7-7 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.254 0.0.0.0 UG 100 0 0 ens33
10.0.0.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
172.17.2.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
[root@Template-CentOS7-7 ~]# route add -net 172.17.1.0/24 gw 10.0.0.11
[root@Template-CentOS7-7 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.254 0.0.0.0 UG 100 0 0 ens33
10.0.0.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
172.17.1.0 10.0.0.11 255.255.255.0 UG 0 0 0 ens33
172.17.2.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
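The legacy `route` command used above still works on CentOS 7, but the same static route can also be managed with the iproute2 `ip` tool; a sketch of the equivalent commands:

```shell
# iproute2 equivalent of `route add -net 172.17.1.0/24 gw 10.0.0.11`:
ip route add 172.17.1.0/24 via 10.0.0.11 dev ens33

# Verify the routing table:
ip route show
```

Note that routes added this way live only in the kernel and are lost on reboot; they must be made persistent separately (e.g. in the interface configuration).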
[root@Template-CentOS7-7 ~]# iptables-save
# Generated by iptables-save v1.4.21 on Wed May 3 21:48:09 2023
*filter
:INPUT ACCEPT [898:1198629]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [655:77221]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed May 3 21:48:09 2023
# Generated by iptables-save v1.4.21 on Wed May 3 21:48:09 2023
*nat
:PREROUTING ACCEPT [3:391]
:INPUT ACCEPT [2:307]
:OUTPUT ACCEPT [8:552]
:POSTROUTING ACCEPT [8:552]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.2.0/24 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed May 3 21:48:09 2023
[root@Template-CentOS7-7 ~]# iptables -A FORWARD -s 10.0.0.0/8 -j ACCEPT
[root@Template-CentOS7-7 ~]# iptables-save
# Generated by iptables-save v1.4.21 on Wed May 3 21:48:49 2023
*filter
:INPUT ACCEPT [6:428]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [3:452]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -s 10.0.0.0/8 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed May 3 21:48:49 2023
# Generated by iptables-save v1.4.21 on Wed May 3 21:48:49 2023
*nat
:PREROUTING ACCEPT [3:391]
:INPUT ACCEPT [2:307]
:OUTPUT ACCEPT [8:552]
:POSTROUTING ACCEPT [8:552]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.2.0/24 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed May 3 21:48:49 2023
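The `iptables -A` rules and `route add` entries configured on both hosts exist only in the running kernel and are lost on reboot. One way to persist the firewall rules — assuming the `iptables-persistent` package on Ubuntu and the `iptables-services` package on CentOS 7; adapt to your environment:

```shell
# Ubuntu 18.04: save the current ruleset (assumes iptables-persistent)
apt install -y iptables-persistent
netfilter-persistent save        # writes /etc/iptables/rules.v4

# CentOS 7: save the current ruleset (assumes iptables-services)
yum install -y iptables-services
service iptables save            # writes /etc/sysconfig/iptables
```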

2.4、Verifying cross-host container communication

# Enter the container on the Ubuntu host to verify cross-host container communication
[root@Docker-Ubu1804-p11:~]# docker exec -it nginx-1 bash
[root@5bef36b85ba9 /]# ping -c5 172.17.2.2
PING 172.17.2.2 (172.17.2.2) 56(84) bytes of data.
64 bytes from 172.17.2.2: icmp_seq=1 ttl=62 time=3.06 ms
64 bytes from 172.17.2.2: icmp_seq=2 ttl=62 time=0.878 ms
64 bytes from 172.17.2.2: icmp_seq=3 ttl=62 time=2.96 ms
64 bytes from 172.17.2.2: icmp_seq=4 ttl=62 time=1.08 ms
64 bytes from 172.17.2.2: icmp_seq=5 ttl=62 time=1.44 ms

--- 172.17.2.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 0.878/1.887/3.066/0.940 ms
[root@5bef36b85ba9 /]#

# Enter the container on the CentOS host to verify cross-host container communication
[root@Template-CentOS7-7 ~]# docker exec -it nginx-2 bash
[root@ed9405026cba /]# ping -c5 172.17.1.2
PING 172.17.1.2 (172.17.1.2) 56(84) bytes of data.
64 bytes from 172.17.1.2: icmp_seq=1 ttl=62 time=0.630 ms
64 bytes from 172.17.1.2: icmp_seq=2 ttl=62 time=0.482 ms
64 bytes from 172.17.1.2: icmp_seq=3 ttl=62 time=1.02 ms
64 bytes from 172.17.1.2: icmp_seq=4 ttl=62 time=0.514 ms
64 bytes from 172.17.1.2: icmp_seq=5 ttl=62 time=0.652 ms

--- 172.17.1.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 0.482/0.660/1.023/0.193 ms
[root@ed9405026cba /]#

Observing the packets with tcpdump

# Use tcpdump to observe the packets while a container on the other host pings a container on this host
[root@Docker-Ubu1804-p11:~]# tcpdump -i ens33 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
21:55:29.872379 IP 10.0.0.7 > 172.17.1.2: ICMP echo request, id 49, seq 1, length 64
21:55:29.872628 IP 172.17.1.2 > 10.0.0.7: ICMP echo reply, id 49, seq 1, length 64
21:55:30.873955 IP 10.0.0.7 > 172.17.1.2: ICMP echo request, id 49, seq 2, length 64
21:55:30.874150 IP 172.17.1.2 > 10.0.0.7: ICMP echo reply, id 49, seq 2, length 64
21:55:31.876447 IP 10.0.0.7 > 172.17.1.2: ICMP echo request, id 49, seq 3, length 64
21:55:31.876895 IP 172.17.1.2 > 10.0.0.7: ICMP echo reply, id 49, seq 3, length 64
21:55:32.878403 IP 10.0.0.7 > 172.17.1.2: ICMP echo request, id 49, seq 4, length 64
21:55:32.878568 IP 172.17.1.2 > 10.0.0.7: ICMP echo reply, id 49, seq 4, length 64
21:55:33.880535 IP 10.0.0.7 > 172.17.1.2: ICMP echo request, id 49, seq 5, length 64
21:55:33.880691 IP 172.17.1.2 > 10.0.0.7: ICMP echo reply, id 49, seq 5, length 64

[root@Template-CentOS7-7 ~]# tcpdump -i ens33 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
21:57:49.861582 IP 10.0.0.11 > 172.17.2.2: ICMP echo request, id 86, seq 1, length 64
21:57:49.861766 IP 172.17.2.2 > 10.0.0.11: ICMP echo reply, id 86, seq 1, length 64
21:57:50.864162 IP 10.0.0.11 > 172.17.2.2: ICMP echo request, id 86, seq 2, length 64
21:57:50.864351 IP 172.17.2.2 > 10.0.0.11: ICMP echo reply, id 86, seq 2, length 64
21:57:51.865893 IP 10.0.0.11 > 172.17.2.2: ICMP echo request, id 86, seq 3, length 64
21:57:51.865991 IP 172.17.2.2 > 10.0.0.11: ICMP echo reply, id 86, seq 3, length 64
21:57:52.886136 IP 10.0.0.11 > 172.17.2.2: ICMP echo request, id 86, seq 4, length 64
21:57:52.886323 IP 172.17.2.2 > 10.0.0.11: ICMP echo reply, id 86, seq 4, length 64
21:57:53.888046 IP 10.0.0.11 > 172.17.2.2: ICMP echo request, id 86, seq 5, length 64
21:57:53.888166 IP 172.17.2.2 > 10.0.0.11: ICMP echo reply, id 86, seq 5, length 64
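Note that in both captures the ICMP source address is the peer host's physical address (10.0.0.7 or 10.0.0.11) rather than the container's address: outbound container traffic matches the `-A POSTROUTING -s 172.17.x.0/24 ! -o docker0 -j MASQUERADE` rule shown earlier and is source-NATed to the outgoing interface. The per-rule counters can confirm this; a quick check:

```shell
# Show NAT POSTROUTING rules with packet/byte counters; the MASQUERADE
# rule's counters increase as container traffic leaves via ens33:
iptables -t nat -vnL POSTROUTING
```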
