Multicast on OpenStack
In this post I test multicast on OpenStack, using an external router.
OpenStack Environment:
Havana (ML2 + OVS)
Test Environment:
VM1 is in VLAN 850; VM2, VM3, and VM4 are in VLAN 820.
VM2 and VM3 are on the same compute node, ci91szcmp003; VM1 and VM4 are on another compute node, ci91szcmp004.
Here is the topology:
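Roughly, based on the VLAN and node placement above (the physical switch and external router are described in more detail in the test cases below):

                +--------------+
                |  Phy Router  |   routes between VLAN 820 and VLAN 850
                +------+-------+
                       |
                +------+-------+
                |  Phy Switch  |   IGMP snooping enabled
                +--+--------+--+
                   |        |
      +------------+--+  +--+------------+
      | ci91szcmp003  |  | ci91szcmp004  |
      |     (OVS)     |  |     (OVS)     |
      | VM2  VLAN 820 |  | VM1  VLAN 850 |
      | VM3  VLAN 820 |  | VM4  VLAN 820 |
      +---------------+  +---------------+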
The security group I'm using is shown below; UDP port 5001 is the port I used to test multicast packets:
Test Tool:
I use iperf to test multicast. You can easily install this tool on CentOS with the following command:
yum install iperf
If you are not familiar with this tool, see "man iperf" for more detail.
Simulate a multicast sender with the following command:
iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
Simulate a multicast receiver with the following command:
iperf -s -u -B 224.1.1.1 -i 1
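For reference, -u selects UDP, -T 32 sets the multicast TTL, -t 3 sends traffic for three seconds, -i 1 prints per-second statistics, and -B 224.1.1.1 binds the receiver to the group address, which makes iperf join that multicast group. To double-check from the receiver that the join really happened, something like the following should work (the interface name eth0 is an assumption):

# List the multicast groups joined on eth0; 224.1.1.1 should appear while "iperf -s" is running
ip maddr show dev eth0
# Watch the IGMP membership report go out when iperf starts (IGMP is IP protocol 2)
tcpdump -n -i eth0 'ip proto 2'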
Test Cases and Results:
VMs in the same VLAN
Case #1:
VM2 sends multicast packets. VM3 and VM4 do not join the multicast group; use tcpdump to see whether they receive the multicast packets.
The multicast packet flow from VM2 to VM3 is VM2 -> OVS -> VM3, as shown below:
The multicast packet flow from VM2 to VM4 is VM2 -> OVS -> Phy Switch -> OVS -> VM4, as shown below:
Result and Log:
On VM2 I send multicast packets:
[root@VM2 ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 10.224.159.146 port 54457 connected with 224.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec
[ 3] Sent 269 datagrams
On VM3, which did not join the multicast group, I use tcpdump to capture packets:
[root@VM3 ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.1
eth0 1 224.0.0.1
[root@VM3 ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
04:45:59.213678 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.224902 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.236114 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.247387 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.258611 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.269744 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.281011 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
...
On VM4, which did not join the multicast group, I use tcpdump to capture packets:
[root@VM4 ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.1
eth0 1 224.0.0.1
[root@VM4 ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
We can see that VM2 and VM3 are in the same VLAN and on the same compute node, connected through Open vSwitch. Open vSwitch does not support multicast (IGMP) snooping, so it floods multicast packets to every port in the VLAN. That is why VM3 receives the multicast packets even though it never joined the multicast group. At the same time, VM4 does not receive the packets because it is on another compute node: the two compute nodes are connected through a physical switch, and that switch does support IGMP snooping, so it does not forward the multicast traffic to the other compute node.
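To see this flooding from the compute node itself, one could capture directly on the tap device backing VM3's port (a sketch; br-int is the usual Neutron integration bridge name, and the tap device name is a placeholder derived from VM3's Neutron port UUID):

# On compute node ci91szcmp003: list the tap devices plugged into the integration bridge
ovs-vsctl list-ports br-int | grep tap
# Capture on VM3's tap device; the stream shows up even though VM3 never joined the group
tcpdump -n -i tap<port-uuid-prefix> host 224.1.1.1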
Case #2:
VM2 sends multicast packets. VM4 joins the multicast group to see whether it receives the multicast packets.
Result and Log:
On VM2 I send multicast packets:
[root@VM2 ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 10.224.159.146 port 35844 connected with 224.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec
[ 3] Sent 269 datagrams
On VM4 I receive the multicast packets:
[root@VM4 ~]# iperf -s -u -B 224.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 224.1.1.1
Joining multicast group 224.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 224.1.1.1 port 5001 connected with 10.224.159.146 port 35844
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 128 KBytes 1.05 Mbits/sec 0.027 ms 0/ 89 (0%)
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec 0.026 ms 0/ 89 (0%)
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec 0.018 ms 0/ 89 (0%)
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec 0.019 ms 0/ 269 (0%)
[root@VM4 ~]# netstat -n -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.1
eth0 1 224.1.1.1
eth0 1 224.0.0.1
We can see that VM2 and VM4 are in the same VLAN but on different compute nodes. After VM4 joins the multicast group, VM2 sends the packets and VM4 receives them.
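The difference from Case #1 is the IGMP membership report triggered by VM4's join: the physical switch snoops it and starts forwarding the group towards ci91szcmp004. A quick sketch for watching that report leave the compute node (the physical interface name eth1 is an assumption):

# On compute node ci91szcmp004: watch IGMP membership reports heading for the physical switch
tcpdump -n -i eth1 'ip proto 2'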
VMs in different VLANs, using an external router
Case #3:
VM1 sends multicast packets. VM2 and VM4 do not join the multicast group; use tcpdump to see whether they receive the multicast packets.
The multicast packet flow from VM1 to VM2 is VM1 -> OVS -> Phy Switch -> Phy Router -> Phy Switch -> OVS -> VM2, as shown below:
The multicast packet flow from VM1 to VM4 is VM1 -> OVS -> Phy Switch -> Phy Router -> Phy Switch -> OVS -> VM4, as shown below:
Result and Log:
[root@VM1 ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 10.224.148.94 port 60820 connected with 224.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec
[ 3] Sent 269 datagrams
On VM2, which did not join the multicast group:
[root@VM2 ~]# netstat -n -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.1
eth0 1 224.0.0.1
[root@VM2 ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
On VM4, which did not join the multicast group:
[root@VM4 ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.1
eth0 1 224.0.0.1
[root@VM4 ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
Case #4:
VM1 sends multicast packets. VM2 joins the multicast group to see whether it receives the multicast packets.
Result and Log:
[root@VM1 ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 10.224.148.94 port 41301 connected with 224.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec
[ 3] Sent 269 datagrams
On VM2 I receive the multicast packets:
[root@VM2 ~]# iperf -s -u -B 224.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 224.1.1.1
Joining multicast group 224.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 224.1.1.1 port 5001 connected with 10.224.148.94 port 41301
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 128 KBytes 1.05 Mbits/sec 0.029 ms 0/ 89 (0%)
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec 0.031 ms 0/ 89 (0%)
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec 0.025 ms 0/ 89 (0%)
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec 0.025 ms 0/ 269 (0%)
[root@VM2 ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.1
eth0 1 224.1.1.1
eth0 1 224.0.0.1
Conclusion:
1. Open vSwitch does not support IGMP snooping, so if a VM on a compute node sends multicast packets, every VM in the same VLAN on that compute node will receive them. This can mean some performance loss. As far as I know, the Cisco Nexus 1000V (N1K) supports IGMP snooping, so using N1K could give better performance here.
2. If the VMs use an external router, the external router must be configured to support IGMP; with that in place, multicast works well on OpenStack. If you use neutron-l3-agent instead, it builds the virtual router with iptables + network namespaces, which does not support multicast at this time.
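A quick way to check the last point on the network node (a sketch; the router UUID below is a placeholder):

# List the virtual router namespaces created by neutron-l3-agent
ip netns list | grep qrouter
# The router namespace has no multicast routing state, only unicast iptables/NAT rules
ip netns exec qrouter-<router-uuid> ip mroute show
ip netns exec qrouter-<router-uuid> iptables -t nat -L -n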