In production, suppose our OpenStack deployment is a public cloud. For a large number of tenants, the usual Linux bridge + VLAN model simply does not provide enough VLANs, so we introduce VXLAN to carry the internal (east-west) traffic between cloud instances.
Our physical servers typically have four network interfaces: an out-of-band management card; a management NIC (for communication and management between the physical hosts); one NIC for instance external traffic (the switch port facing it is a trunk port, and instances reach the different external networks through VLANs on the host); and a final NIC for instance internal traffic (the switch port facing it is an access port, and the NIC carries an IP address that VXLAN uses as its endpoint).
This article describes the whole Neutron network setup from scratch. If you previously ran only the Linux bridge + VLAN model, you just need to tweak a few configuration files on top of it and restart the network services; a minimal sketch of the changes follows the restart commands below. The files to modify are:
Control node:
/etc/neutron/plugins/ml2/ml2_conf.ini
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
Restart the services:
# systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Compute nodes:
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
Restart the service:
# systemctl restart neutron-linuxbridge-agent.service
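The changes boil down to a handful of lines. A minimal sketch, distilled from the full files shown later in this article (adjust vni_ranges and local_ip to your own environment):
In /etc/neutron/plugins/ml2/ml2_conf.ini on the control node:
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
[ml2_type_vxlan]
vni_ranges = 1001:2000
In /etc/neutron/plugins/ml2/linuxbridge_agent.ini on the control node and on every compute node:
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = <this node's IP on the internal/VTEP NIC>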
 
Lab environment:
eth0: 10.30.1.208  eth1: no IP address  eth2: 192.168.248.1  node1  control node
eth0: 10.30.1.203  eth1: no IP address  eth2: 192.168.248.3  node3  compute node
eth0: 10.30.1.204  eth1: no IP address  eth2: 192.168.248.4  node4  compute node (its configuration is not shown in this article; it is essentially the same as node3)
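For reference, the eth2 interface used for VXLAN traffic only needs a plain static IP. A minimal sketch of /etc/sysconfig/network-scripts/ifcfg-eth2 on node1, assuming a /24 netmask (the netmask is not stated above, so treat it as an assumption):
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.248.1
PREFIX=24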
 
Configure the networking options
 
Install Neutron on the control node (node1)
 
[root@node1 ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Install Neutron on the compute node (node3)
[root@node3 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset
 
Neutron configuration on the control node (node1)
 
# grep -v "^#\|^$" /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =  neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
[keystone_authtoken]
memcached_servers = 10.30.1.208:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = neutron
username = neutron
auth_type = password
[matchmaker_redis]
[nova]
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
quota_network = 200
quota_subnet = 200
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 100
quota_floatingip = 1000
quota_security_group = 100
quota_security_group_rule = 1000
[ssl]
 
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build the layer-2 virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
[root@node1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = external
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = default:1:4000
[ml2_type_vxlan]
vni_ranges = 1001:2000
[securitygroup]
enable_ipset = true
 
Configure the Linux bridge agent
[root@node1 ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = default:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.248.1
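A quick optional sanity check: make sure local_ip really lives on eth2, because the agent selects the VXLAN endpoint by that address rather than by interface name:
[root@node1 ~]# ip addr show eth2 | grep "inet "
The output should list 192.168.248.1.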
 
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach the metadata service over the network:
[root@node1 ~]# grep -v "^#\|^$" /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
 
 
Configure the metadata agent
The metadata agent provides configuration information, such as credentials, to instances.
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the metadata host and the shared secret:
# grep -v '^#\|^$' /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = 10.30.1.208
metadata_proxy_shared_secret = syscloud.cn
[agent]
[cache]
Configure the L3 agent
# grep -v '^#\|^$' /etc/neutron/l3_agent.ini
[DEFAULT]
ovs_use_veth = False
interface_driver = linuxbridge
#interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
debug = True
[agent]
[ovs]
 
 
Configure the Compute service (nova) on the control node to use the Networking service
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [neutron] section, configure the access parameters, enable the metadata proxy, and set the shared secret:
[DEFAULT]
cpu_allocation_ratio=8
ram_allocation_ratio=2
disk_allocation_ratio=2
resume_guests_state_on_host_boot=true
reserved_host_disk_mb=20480
baremetal_enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
transport_url = rabbit://openstack:openstack@10.30.1.208
auth_strategy = keystone
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
[api]
[api_database]
connection = mysql+pymysql://nova:nova@10.30.1.208/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://nova:nova@10.30.1.208/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.30.1.208:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.30.1.208:5000
auth_url = http://10.30.1.208:35357
memcached_servers = 10.30.1.208:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://10.30.1.208:9696
auth_url = http://10.30.1.208:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = syscloud.cn
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.30.1.208:35357/v3
username = placement
password = placement
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled  =  true
server_listen = 0.0.0.0
server_proxyclient_address = 10.30.1.208
[workarounds]
[wsgi]
[xenserver]
[xvp]
 
Finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the symlink does not exist, create it with the following command:
[root@node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
[root@node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note
Database population occurs late in the Networking setup because the script requires complete server and plug-in configuration files.
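If you want to confirm that the migration completed, neutron-db-manage can also print the current schema revision (the exact revision string depends on your release):
[root@node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron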
Restart the Compute API service:
[root@node1 ~]# systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start when the system boots.
For both networking options:
[root@node1 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@node1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
 
Verify that Neutron is up on the control node:
[root@node1 ~]# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |
| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
 
A final verification example:
[root@node1 ~]# openstack extension list --network
 
 
Neutron configuration on the compute node (node3)
Neutron compute node (copy the Neutron configuration files over to the compute node).
Edit the /etc/neutron/neutron.conf file and complete the following actions:
# grep -v '^#\|^$' /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
memcached_servers = 10.30.1.208:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
quota_network = 200
quota_subnet = 200
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 100
quota_floatingip = 50
quota_security_group = 100
quota_security_group_rule = 1000
[ssl]
Configure the networking options
Choose the same networking option that you chose on the control node. Afterwards, return here and proceed to the next step: configure the Compute service on the compute node to use Networking.
Configure the Linux bridge agent
The Linux bridge agent builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
[root@node3 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = default:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.248.3
Configure the Compute service on the compute node to use Networking
Edit the /etc/nova/nova.conf file and complete the following actions:
[DEFAULT]
cpu_allocation_ratio=8
ram_allocation_ratio=2
disk_allocation_ratio=2
resume_guests_state_on_host_boot=true
reserved_host_disk_mb=20480
baremetal_enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@10.30.1.208
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.30.1.208:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.30.1.208:5000
auth_url = http://10.30.1.208:35357
memcached_servers = 10.30.1.208:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 29355b97-1fd8-4135-a26e-d7efeaa27b0a
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://10.30.1.208:9696
auth_url = http://10.30.1.208:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.30.1.208:35357/v3
username = placement
password = placement
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 10.30.1.203
novncproxy_base_url = http://10.30.1.208:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
Finalize the installation
Restart the Compute service:
[root@node3 ~]# systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and configure it to start at boot:
[root@node3 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@node3 ~]# systemctl start neutron-linuxbridge-agent.service
Verify that Neutron is up on the compute node:
[root@node1 ~]# source admin-openstack.sh
[root@node1 ~]# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |
| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
This shows that the Linux bridge agent on the compute node has successfully connected to the control node.
 
Repeat the node3 steps on node4.
 
Check that all of the Neutron agents are now up:
[root@node1 ~]# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |
| c75f9b28-7010-4dfd-b646-ff79456f1435 | Linux bridge agent | node4 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
 
 
Create the VXLAN network "vxlan100_net"
Note: an administrator can create a VXLAN network with any VNI; a regular user cannot specify one and gets a VNI allocated automatically from the range set in the configuration file (vni_ranges). A hedged reconstruction of the commands is shown below.
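The network in this article was created roughly like this (a reconstruction run with admin credentials; the VNI 100 and the subnet range 172.16.100.0/24 match the bridge and namespace output shown below, while the subnet name is illustrative):
[root@node1 ~]# openstack network create --provider-network-type vxlan --provider-segment 100 vxlan100_net
[root@node1 ~]# openstack subnet create --network vxlan100_net --subnet-range 172.16.100.0/24 vxlan100_subnet
A regular (non-admin) user would simply run "openstack network create vxlan100_net" and get a VNI allocated from vni_ranges = 1001:2000.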
 
What changed in the underlying network
Run brctl show on the control node to look at the current network layout:
[root@node1 ~]# brctl show
bridge name    bridge id          STP enabled    interfaces
brq85ae5035-20 8000.42b8819dab66  no             tapd40d05b8-bd
                                                 vxlan-100
Neutron created:
    the bridge brq85ae5035-20 for vxlan100
    the VXLAN interface vxlan-100
    the DHCP tap device tapd40d05b8-bd
 
vxlan-100 and tapd40d05b8-bd are both attached to brq85ae5035-20, so the VXLAN layer-2 network is ready. Run ip -d link show vxlan-100 to inspect the detailed configuration of the VXLAN interface:
11: vxlan-100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq85ae5035-20 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 42:b8:81:9d:ab:66 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 100 dev eth2 srcport 0 0 dstport 8472 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx
As you can see, the VNI of vxlan-100 is 100 and the corresponding VTEP network interface is eth2.
 
Next, a look at the DHCP side of vxlan-100:
[root@node1 ~]# openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID                                   | Name         | Subnets                              |
+--------------------------------------+--------------+--------------------------------------+
| 5ac5c948-909f-47ff-beba-a2ffaf917c5f | vlan99       | bbd536c6-a975-4841-8082-35b28de16ef0 |
| 85ae5035-203b-4ef7-b65c-397f80b5a8af | vxlan100_net | b81eec88-d7b5-49ef-bf45-7c251bebf165 |
+--------------------------------------+--------------+--------------------------------------+
[root@node1 ~]# ip netns list | grep 85ae5035-203b-4ef7-b65c-397f80b5a8af
qdhcp-85ae5035-203b-4ef7-b65c-397f80b5a8af (id: 1)
[root@node1 ~]# ip netns exec qdhcp-85ae5035-203b-4ef7-b65c-397f80b5a8af ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-d40d05b8-bd@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:6e:c0:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.100.10/24 brd 172.16.100.255 scope global ns-d40d05b8-bd
       valid_lft forever preferred_lft forever
    inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-d40d05b8-bd
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6e:c044/64 scope link
       valid_lft forever preferred_lft forever
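Because enable_isolated_metadata = true, the DHCP namespace also listens on 169.254.169.254, so instances on this network can reach the metadata service without a router. From inside a guest, something like the following should work (a sketch; the exact data returned depends on the deployment):
$ curl http://169.254.169.254/latest/meta-data/instance-id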
 
 
 
Attach instances to vxlan100_net
 
 
 
[root@node1 ~]# openstack server list
+--------------------------------------+---------------+--------+--------------------------------------------------+-----------------+--------+
| ID                                   | Name          | Status | Networks                                         | Image           | Flavor |
+--------------------------------------+---------------+--------+--------------------------------------------------+-----------------+--------+
| 929c6c23-804e-4cb7-86ad-d8db8554e33f | centos7.6-vm2 | ACTIVE | vlan99=172.16.99.117; vxlan100_net=172.16.100.19 | CentOS 7.6 64位 | 1c1g   |
| 027788d0-f189-4362-8716-2d0a9548dded | centos7.6-vm1 | ACTIVE | vlan99=172.16.99.123; vxlan100_net=172.16.100.12 | CentOS 7.6 64位 | 1c1g   |
+--------------------------------------+---------------+--------+--------------------------------------------------+-----------------+--------+
Check the network state on the compute node after the instances are created:
[root@node3 ~]# virsh list --name --uuid
929c6c23-804e-4cb7-86ad-d8db8554e33f instance-0000014b             
 
 
[root@node3 ~]# virsh domiflist 929c6c23-804e-4cb7-86ad-d8db8554e33f
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tap9784ef08-97 bridge     brq5ac5c948-90 virtio      fa:16:3e:e7:c4:39
tap31c64a21-76 bridge     brq85ae5035-20 virtio      fa:16:3e:cb:98:2c
 
[root@node3 ~]# brctl show
bridge name    bridge id          STP enabled    interfaces
brq5ac5c948-90 8000.525400a141e1  no             eth1.99
                                                 tap9784ef08-97
brq85ae5035-20 8000.d2d05820b08c  no             tap31c64a21-76
                                                 vxlan-100
 
 
 
centos7.6-vm1 (172.16.100.12) and centos7.6-vm2 (172.16.100.19) run on different compute nodes and are connected through vxlan100. Let's verify connectivity with ping.
On centos7.6-vm2, run ping 172.16.100.12.
 
 
 
 
Troubleshooting: centos7.6-vm1 (172.16.100.12) and centos7.6-vm2 (172.16.100.19) could not ping each other even though SELinux and iptables were disabled inside both instances. The problem turned out to be the security group: an ICMP allow rule had to be added to the security group attached to the instances.
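The rule can be added from the CLI as well as from the dashboard. A hedged sketch, assuming the instances use the security group named default in the current project:
[root@node1 ~]# openstack security group rule create --protocol icmp --ingress default
After the rule is in place, the ping between 172.16.100.12 and 172.16.100.19 succeeds.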
 
 
 
 
 
Understanding L2 Population
L2 Population exists to improve the scalability of VXLAN networks.
It tackles the cost of flooding once a VXLAN network spans many nodes: L2 Population provides a proxy-ARP-like function on the VTEPs, so each VTEP learns the following about the VXLAN network in advance:
    the VM IP-to-MAC mappings
    the VM-to-VTEP mappings
 
Looking at the forwarding database on the control node, you can see that the VTEP has stored the port information of centos7.6-vm1 and centos7.6-vm2:
[root@node1 ~]# bridge fdb show dev vxlan-100
42:b8:81:9d:ab:66 vlan 1 master brq85ae5035-20 permanent
42:b8:81:9d:ab:66 master brq85ae5035-20 permanent
fa:16:3e:cb:98:2c master brq85ae5035-20
fa:16:3e:06:42:34 master brq85ae5035-20
00:00:00:00:00:00 dst 192.168.248.4 self permanent
00:00:00:00:00:00 dst 192.168.248.3 self permanent
fa:16:3e:06:42:34 dst 192.168.248.4 self permanent
fa:16:3e:cb:98:2c dst 192.168.248.3 self permanent
The MAC of centos7.6-vm2 is fa:16:3e:cb:98:2c
The MAC of centos7.6-vm1 is fa:16:3e:06:42:34
 
Now look at the forwarding databases on the two compute nodes:
[root@node3 ~]# bridge fdb show dev vxlan-100
d2:d0:58:20:b0:8c master brq85ae5035-20 permanent
d2:d0:58:20:b0:8c vlan 1 master brq85ae5035-20 permanent
00:00:00:00:00:00 dst 192.168.248.1 self permanent
00:00:00:00:00:00 dst 192.168.248.4 self permanent
fa:16:3e:06:42:34 dst 192.168.248.4 self permanent
fa:16:3e:6e:c0:44 dst 192.168.248.1 self permanent
 
[root@node4 ~]#  bridge fdb show dev vxlan-100
da:1e:c7:c0:6a:dc master brq85ae5035-20 permanent
da:1e:c7:c0:6a:dc vlan 1 master brq85ae5035-20 permanent
00:00:00:00:00:00 dst 192.168.248.1 self permanent
00:00:00:00:00:00 dst 192.168.248.3 self permanent
fa:16:3e:6e:c0:44 dst 192.168.248.1 self permanent
fa:16:3e:cb:98:2c dst 192.168.248.3 self permanent
When centos7.6-vm2 (fa:16:3e:cb:98:2c) needs to talk to centos7.6-vm1 (fa:16:3e:06:42:34), the node3 VTEP 192.168.248.3 sends the encapsulated VXLAN packets directly to the node4 VTEP 192.168.248.4, with no flooding required.
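You can watch this on the underlay to confirm it. The vxlan-100 interface above shows dstport 8472, so capturing UDP 8472 on eth2 of node3 while the ping is running should show packets going straight to 192.168.248.4 (a quick sketch):
[root@node3 ~]# tcpdump -nn -i eth2 udp port 8472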
 
Side note: the configuration never names eth2 explicitly as the interface for instance internal traffic; the local_ip = x.x.x.x setting already determines it, because the interface that owns that IP is the one used for VXLAN traffic.
local_ip specifies the VTEP IP address:
The VTEP IP of the control node (node1) is 192.168.248.1
The VTEP IP of the compute node (node3) is 192.168.248.3
The VTEP IP of the compute node (node4) is 192.168.248.4
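If instances on different nodes cannot reach each other, it is also worth confirming first that the VTEPs themselves can reach each other over the underlay, for example:
[root@node1 ~]# ping -c 3 192.168.248.3
[root@node1 ~]# ping -c 3 192.168.248.4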
 
 
About the error:
We noticed that the Linux bridge agent on node4 was not up:
[root@node1 ~]# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |
| c75f9b28-7010-4dfd-b646-ff79456f1435 | Linux bridge agent | node4 | None              | XXX   | UP    | neutron-linuxbridge-agent |
| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
[root@node4 ~]# systemctl status neutron-linuxbridge-agent.service
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Sun 2020-02-09 11:17:22 CST; 2min 1s ago
  Process: 8499 ExecStart=/usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent --log-file /var/log/neutron/linuxbridge-agent.log (code=exited, status=1/FAILURE)
  Process: 8493 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 8499 (code=exited, status=1/FAILURE)
 
 
Feb 09 11:17:22 node4 systemd[1]: Unit neutron-linuxbridge-agent.service entered failed state.
Feb 09 11:17:22 node4 systemd[1]: neutron-linuxbridge-agent.service failed.
Feb 09 11:17:22 node4 systemd[1]: neutron-linuxbridge-agent.service holdoff time over, scheduling restart.
Feb 09 11:17:22 node4 systemd[1]: Stopped OpenStack Neutron Linux Bridge Agent.
Feb 09 11:17:22 node4 systemd[1]: start request repeated too quickly for neutron-linuxbridge-agent.service
Feb 09 11:17:22 node4 systemd[1]: Failed to start OpenStack Neutron Linux Bridge Agent.
Feb 09 11:17:22 node4 systemd[1]: Unit neutron-linuxbridge-agent.service entered failed state.
Feb 09 11:17:22 node4 systemd[1]: neutron-linuxbridge-agent.service failed.
 
Check the logs:
Feb 09 11:06:18 node4 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent...
-- Subject: Unit neutron-linuxbridge-agent.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit neutron-linuxbridge-agent.service has begun starting up.
Feb 09 11:06:18 node4 neutron-enable-bridge-firewall.sh[4773]: net.bridge.bridge-nf-call-iptables = 1
Feb 09 11:06:18 node4 neutron-enable-bridge-firewall.sh[4773]: net.bridge.bridge-nf-call-ip6tables = 1
Feb 09 11:06:18 node4 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
-- Subject: Unit neutron-linuxbridge-agent.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit neutron-linuxbridge-agent.service has finished starting up.
--
-- The start-up result is done.
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: Traceback (most recent call last):
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/bin/neutron-linuxbridge-agent", line 10, in <module>
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: sys.exit(main())
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py", line 21, in main
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: agent_main.main()
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", line 985, in main
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: common_config.init(sys.argv[1:])
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/neutron/common/config.py", line 78, in init
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: **kwargs)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2502, in __call__
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: else sys.argv[1:])
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3166, in _parse_cli_opts
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: return self._parse_config_files()
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3202, in _parse_config_files
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: self._oparser.parse_args(self._args, namespace)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2330, in parse_args
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: return super(_CachedArgumentParser, self).parse_args(args, namespace)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1688, in parse_args
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: args, argv = self.parse_known_args(args, namespace)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1720, in parse_known_args
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: namespace, args = self._parse_known_args(args, namespace)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1926, in _parse_known_args
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: start_index = consume_optional(start_index)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1866, in consume_optional
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: take_action(action, args, option_string)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1794, in take_action
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: action(self, namespace, argument_values, option_string)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1695, in __call__
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: ConfigParser._parse_file(values, namespace)
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1950, in _parse_file
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: raise ConfigFileParseError(pe.filename, str(pe))
Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: oslo_config.cfg.ConfigFileParseError: Failed to parse /etc/neutron/plugins/ml2/linuxbridge_agent.ini: at /etc/neutron/plugins/ml2/linuxbridge_agent.ini:1, Invalid section (must end with ]): '[DEFAULT'
Feb 09 11:06:20 node4 systemd[1]: neutron-linuxbridge-agent.service: main process exited, code=exited, status=1/FAILURE
Feb 09 11:06:20 node4 systemd[1]: Unit neutron-linuxbridge-agent.service entered failed state.
Feb 09 11:06:20 node4 systemd[1]: neutron-linuxbridge-agent.service failed.
Feb 09 11:06:20 node4 systemd[1]: neutron-linuxbridge-agent.service holdoff time over, scheduling restart.
Feb 09 11:06:20 node4 systemd[1]: Stopped OpenStack Neutron Linux Bridge Agent.
-- Subject: Unit neutron-linuxbridge-agent.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit neutron-linuxbridge-agent.service has finished shutting down.
The cause was an error on the first line of /etc/neutron/plugins/ml2/linuxbridge_agent.ini, a typo made while editing the file: change [DEFAULT to [DEFAULT].
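A quick way to catch this kind of typo is to list the section headers of the file and check that every one of them is properly closed (a simple sanity check):
[root@node4 ~]# grep -n '^\[' /etc/neutron/plugins/ml2/linuxbridge_agent.ini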
Restart the service:
[root@node4 ~]# systemctl reset-failed neutron-linuxbridge-agent.service
[root@node4 ~]# systemctl start neutron-linuxbridge-agent.service
 
This time the logs look normal:
Feb 09 11:27:14 node4 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent...
-- Subject: Unit neutron-linuxbridge-agent.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit neutron-linuxbridge-agent.service has begun starting up.
Feb 09 11:27:14 node4 neutron-enable-bridge-firewall.sh[9710]: net.bridge.bridge-nf-call-iptables = 1
Feb 09 11:27:14 node4 neutron-enable-bridge-firewall.sh[9710]: net.bridge.bridge-nf-call-ip6tables = 1
Feb 09 11:27:14 node4 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
-- Subject: Unit neutron-linuxbridge-agent.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit neutron-linuxbridge-agent.service has finished starting up.
--
-- The start-up result is done.
Feb 09 11:27:14 node4 polkitd[3346]: Unregistered Authentication Agent for unix-process:9704:132919 (system bus name :1.52, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 09 11:27:17 node4 sudo[9738]:  neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
Feb 09 11:27:17 node4 systemd[1]: Started Session c1 of user root.
-- Subject: Unit session-c1.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-c1.scope has finished starting up.
--
-- The start-up result is done.
Feb 09 11:27:17 node4 sudo[9738]: pam_unix(sudo:session): session opened for user root by (uid=0)
Feb 09 11:30:01 node4 systemd[1]: Started Session 10 of user root.
-- Subject: Unit session-10.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-10.scope has finished starting up.
--
-- The start-up result is done.
 
 
[root@node1 ~]# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |
| c75f9b28-7010-4dfd-b646-ff79456f1435 | Linux bridge agent | node4 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
 
