OpenStack Neutron L2 Population
Why do we need it, whatever it is?
VM unicast, multicast and broadcast traffic flow is detailed in my previous post:
TL;DR: Agent OVS flow tables implement learning. That is, any unknown unicast destination (i.e. MAC addresses the virtual switch is not familiar with), multicast or broadcast traffic is flooded out tunnels to all other compute nodes. Incoming traffic is used for learning: its source MAC address is added to a learning table, so future traffic to that MAC address is not flooded but sent directly to the hosting node. There are several inefficiencies here:
- The MAC addresses aren’t initially known by the agents, but the Neutron service has full knowledge of the topology
- There are still a lot of broadcasts going around in the form of ARP requests. Maybe we can optimize those away?
- More about broadcasts: What if a node isn’t hosting any ports in a specific network? Should this node receive broadcast traffic destined for that network?
A great visual explanation for the third point, stolen shamelessly from the official OpenStack documentation:
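To make the learning scheme described above concrete, here is a minimal Python sketch of what the flow tables effectively do. This is a toy model, not Neutron code; all names are invented for illustration:

# Toy model of MAC learning as the OVS flow tables implement it.
class LearningSwitch:
    def __init__(self, tunnel_peers):
        self.tunnel_peers = set(tunnel_peers)   # all other compute nodes
        self.mac_to_peer = {}                   # the learning table

    def handle_frame(self, src_mac, dst_mac, ingress_peer):
        # Learn: remember which tunnel the source MAC arrived from.
        self.mac_to_peer[src_mac] = ingress_peer
        # Known unicast goes straight to the hosting node...
        if dst_mac in self.mac_to_peer:
            return [self.mac_to_peer[dst_mac]]
        # ...unknown unicast, broadcast and multicast are flooded out every tunnel.
        return list(self.tunnel_peers)

L2 population exists to pre-populate the learning table (and the flood targets) from the Neutron service's knowledge of the topology, instead of waiting for traffic to teach each agent.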
Overview
When using the ML2 plugin with tunnels and a new port goes up, ML2 sends an update_port_postcommit notification, which is picked up and processed by the l2pop mechanism driver. l2pop then gathers the IP and MAC of the port, as well as the host that the port was scheduled on, and sends an RPC notification to all layer 2 agents. The agents use the notification to solve the three issues detailed above.
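As a rough illustration of the data involved: the notification essentially carries, per network, the tunnel endpoints and the MAC/IP pairs of the ports they host. The exact payload layout is an implementation detail that has varied between releases, so treat the field names below as assumptions rather than the wire format:

# Illustrative shape of an l2pop fdb notification; field names are assumptions.
fdb_entries = {
    "net-uuid": {
        "network_type": "vxlan",
        "segment_id": 1001,                         # tunnel key / VNI
        "ports": {
            "192.168.100.11": [                     # tunnel IP of the hosting node
                ["fa:16:3e:aa:bb:cc", "10.0.0.5"],  # port MAC, port IP
            ],
        },
    },
}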
Configuration
ml2_conf.ini:
[ml2]
mechanism_drivers = ..., l2population, ...
[agent]
l2_population = True
Deep-Dive & Code
plugins/ml2/drivers/l2pop/mech_driver.py:update_port_postcommit calls _update_port_up. In _update_port_up we send the new port’s IP and MAC address to all agents via an ‘add_fdb_entries’ RPC fanout cast. Additionally, if this new port is the first port in a network on the scheduled agent, then we send all IP and MAC addresses on the network to that agent.
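In outline, the server-side logic looks something like the following. This is paraphrased pseudo-Python; the function signature and RPC helpers are simplified stand-ins, not the real mech_driver.py API:

def rpc_cast(host, method, payload):
    # Stand-in for the real RPC layer: a directed cast to one agent.
    print(f"cast to {host}: {method}({payload})")

def rpc_fanout_cast(method, payload):
    # Stand-in for the real RPC layer: a fanout cast to every L2 agent.
    print(f"fanout cast: {method}({payload})")

def update_port_up(port_entry, agent_host, active_ports_on_agent, full_network_fdb):
    if active_ports_on_agent == 1:
        # First port of this network on the scheduled agent: send that agent
        # every known MAC/IP/tunnel entry for the whole network.
        rpc_cast(agent_host, "add_fdb_entries", full_network_fdb)
    # Announce the new port's MAC/IP/tunnel endpoint to all other L2 agents.
    rpc_fanout_cast("add_fdb_entries", port_entry)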
‘add_fdb_entries’ is picked up via agent/l2population_rpc.py:add_fdb_entries, which calls fdb_add if the RPC call was a fanout or was directed at the local host.
fdb_add is implemented by the OVS and LB agents: plugins/openvswitch/agent/ovs_neutron_agent.py and plugins/linuxbridge/agent/linuxbridge_neutron_agent.py.
In the OVS agent, fdb_add accomplishes three main things (a simplified sketch follows the list below):
For each port received:
- Set up a tunnel to the remote agent if one does not already exist
- If it’s a flood entry, set up a flood flow to the remote network. Reminder: A flood entry is sent out to all agents in case a port goes up which happens to be the first port for an agent & network pair
- If it’s a unicast entry, add it to the unicast learning table
- A big fat TO-DO about ARP replies. Implemented in the Icehouse release with this patch: https://review.openstack.org/#/c/49227/
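Here is a deliberately simplified sketch of that handling, using plain dictionaries as stand-ins for tunnel ports and OpenFlow flows. None of the names below come from the real agent; the all-zeros MAC/IP pair is used here to mark a flood entry:

FLOODING_ENTRY = ("00:00:00:00:00:00", "0.0.0.0")

def fdb_add(state, network_id, remote_tunnel_ip, entries):
    # 1. Set up a tunnel to the remote agent if one does not already exist.
    state.setdefault("tunnels", set()).add(remote_tunnel_ip)
    for mac, ip in entries:
        if (mac, ip) == FLOODING_ENTRY:
            # 2. Flood entry: add the tunnel to this network's flood output list.
            state.setdefault("flood", {}).setdefault(network_id, set()).add(remote_tunnel_ip)
        else:
            # 3. Unicast entry: pre-populate the learning table so traffic to
            #    this MAC goes straight to the hosting node instead of flooding.
            state.setdefault("unicast", {})[(network_id, mac)] = remote_tunnel_ip
    return state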
Finally, with l2_population = True, a bunch of code in the OVS agent is disabled: tunnel_update and tunnel_sync RPC messages are ignored, replaced by fdb_add and fdb_remove.
Supported Topologies
All of this is fully supported since the Havana release when using GRE and VXLAN tunneling with the ML2 plugin, apart from the ARP resolution optimization which is implemented only for the Linux bridge agent with the VXLAN driver. ARP resolution will be added to the OVS agent with GRE and VXLAN drivers in the Icehouse release.
Links
http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html#ml2_l2pop_scenarios
Reposted from http://assafmuller.com/2014/02/23/ml2-address-population/