Link: http://rabbitstack.github.io/deploying-cloud-foundry-on-openstack-juno-and-xenserver-part-i/

The Cloud Foundry ecosystem has been blowing my mind for a long time, and I think it has truly disrupted IT by letting us focus on applications as the essential unit of the business process. There is no need for us to worry about all those painful things like scalability, multi-tenancy, and application health. Cloud Foundry will do that nasty job for us, and much more. It can be considered an operating system for the cloud.

While I was investigating Cloud Foundry, I also discovered its infrastructure-agnostic nature, which enables it to be deployed easily on AWS, vSphere, or OpenStack. That is how I got motivated to acquire one of those cheap Dell rack servers on eBay and start experimenting. XenServer 6.2 is what I chose as the hypervisor to be orchestrated by OpenStack. Unfortunately, the documentation about setting up an OpenStack compute node on XenServer is rather incomplete, outdated, and very hard to follow if you are doing it for the first time. So, let's see how to proceed step by step and prepare our OpenStack environment for a Cloud Foundry deployment. I assume you have already installed and configured the controller node.

Installing paravirtualized XenServer domain

The OpenStack compute node needs a paravirtualized virtual machine running on each XenServer instance. A paravirtualized VM basically has a recompiled kernel so it can talk directly to the hypervisor API. If CentOS is your distribution of choice, then the easiest way to set up a PV virtual machine is by using this kickstart file.

Let's first create the VM. Please note we have to use the Red Hat 6 template, even though we are going to install the CentOS 7 distribution. On XenServer 6.5 this is not necessary.

TEMPLATE_UUID=$(xe template-list | grep -B1 'name-label.*Red Hat.* 6.*64-bit' | awk -F: '/uuid/{print $2}'| tr -d " ")
VMUUID=$(xe vm-install new-name-label="compute" template=${TEMPLATE_UUID})
xe vm-param-set uuid=$VMUUID other-config:install-repository=http://mirror.centos.org/centos/7/os/x86_64
xe vm-param-set uuid=$VMUUID PV-args="ks=https://gist.githubusercontent.com/bhnedo/4648499f5680207e86ec/raw/4239fd8d0e10f7f2759d600b28b52f1744d9b5ad/kickstart-centos-minimal.cfg ksdevice=eth0"
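
The template defaults are quite small, so before starting the installation you may want to give the compute VM more memory and vCPUs. The values below are only an example; size them to your hardware:

xe vm-memory-limits-set uuid=$VMUUID static-min=4GiB dynamic-min=4GiB dynamic-max=4GiB static-max=4GiB
xe vm-param-set uuid=$VMUUID VCPUs-max=4
xe vm-param-set uuid=$VMUUID VCPUs-at-startup=4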

Find out the network UUID for the bridge that has access to the Internet. Note that one Xen bridge is created for every physical network adapter on your machine. Get a list of XenServer networks and store the UUID for the appropriate bridge (in most cases it will be xenbr0).

xe network-list
NETUUID=$(xe network-list bridge=xenbr0 --minimal)

Create a virtual network interface (VIF) and attach it to the virtual machine and network. Start the VM and watch the installation progress from XenCenter.

xe vif-create vm-uuid=$VMUUID network-uuid=$NETUUID mac=random device=0
xe vm-start uuid=$VMUUID

When the installation process is done, export the VM so we have a base image to reuse for the storage node.

xe vm-export uuid=$VMUUID filename=openstack-juno-centos7.xva

Notice: PyGrub doesn't support the GRUB 2 boot loader. You will need to apply the following patch in order to boot the VM properly. This issue has been corrected in the XenServer 6.5 release.

Installing and configuring compute service

Once you have a running PV guest, the next step is to install the OpenStack plugins for the XenServer Dom0. These let the compute node communicate with the Xen XAPI in order to provision virtual machines, set up networking, storage, etc. Download the latest OpenStack Juno branch, unzip it, and copy the contents of the plugins/xenserver/xenapi/etc/xapi.d/plugins directory to /etc/xapi.d/plugins. Also ensure that the added files are executable.

cd /tmp
wget https://github.com/openstack/nova/archive/stable/juno.zip
unzip juno.zip
cp /tmp/nova-stable-juno/plugins/xenserver/xenapi/etc/xapi.d/plugins/* /etc/xapi.d/plugins
chmod a+x /etc/xapi.d/plugins/*
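
To double-check that Dom0 can actually execute the plugins, list them and, optionally, invoke one of them through XAPI. The host_data function of the xenhost plugin is present in the Juno tree, but treat the call below as a sketch and adjust it if your plugin set differs:

ls -l /etc/xapi.d/plugins
HOSTUUID=$(xe host-list --minimal)
xe host-call-plugin host-uuid=$HOSTUUID plugin=xenhost fn=host_data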

Log into your newly installed compute node (the default password for the root user is changeit) and run these commands to enable the OpenStack Juno repository and upgrade the packages on your host.

yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum upgrade
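
A quick way to tell whether the upgrade pulled in a new kernel is to compare what is installed with what is running:

rpm -q kernel    # kernels installed after the upgrade
uname -r         # kernel the VM is currently running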

If the kernel was upgraded, you will probably need to reboot the machine after the upgrade process in order to activate the new kernel. Now install the required packages for the compute hypervisor components and nova-network legacy networking.

yum install openstack-nova-compute sysfsutils
yum install openstack-nova-network openstack-nova-api

The XenAPI Python package is also required, so install it using the pip package manager.

easy_install pip
pip install xenapi
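
The package name is lowercase on the command line, but the module nova imports is called XenAPI. Assuming pip pulled in the expected package, a quick sanity check looks like this:

python -c "import XenAPI; print XenAPI.__file__"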

I didn't want to set up another network node for Neutron, even though legacy networking is deprecated in favor of the aforementioned component. If you need advanced features like VLANs, virtual routing, switching, tenant isolation, and so on, follow these docs on how to add Neutron networking.

Now we need to edit the /etc/nova/nova.conf configuration file.

  1. Message broker settings

    Configure RabbitMQ messaging system in the [DEFAULT] section:

     [DEFAULT]
    rpc_backend = rabbit
    rabbit_host = controller
    rabbit_userid = RABBIT_USER
    rabbit_password = RABBIT_PASSWORD
  2. Keystone authentication

    Modify [DEFAULT] and [keystone_authtoken] sections to configure authentication service access:

    [DEFAULT]
    auth_strategy = keystone

    [keystone_authtoken]
    auth_uri = http://controller:5000/v2.0
    identity_uri = http://controller:35357
    admin_tenant_name = service
    admin_user = nova
    admin_password = NOVA_PASSWORD
  3. Network configuration

    Before proceeding with network parameters, you will need to create a second VIF and attach it to the compute VM.

     $ xe vif-create vm-uuid=$VMUUID network-uuid=$NETUUID mac=random device=1
    $ xe vm-start uuid=$VMUUID

    This network interface will be connected to the Linux bridge and at the same time will act as the default gateway for all VM instances spawned inside OpenStack. The traffic forwarding between tenants is done at the L2 level through this bridge. You should end up with the following interfaces, with xenbr0 up, after creating the network in OpenStack.

    $ ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
          inet 192.168.1.106  netmask 255.255.255.0  broadcast 192.168.1.255
          inet6 fe80::90b3:8fff:fe2c:1d09  prefixlen 64  scopeid 0x20
          ether 92:b3:8f:2c:1d:09  txqueuelen 1000  (Ethernet)
          RX packets 3016  bytes 1189159 (1.1 MiB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 2812  bytes 636656 (621.7 KiB)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
          inet6 fe80::44ab:daff:fe21:46d4  prefixlen 64  scopeid 0x20
          ether 46:ab:da:21:46:d4  txqueuelen 1000  (Ethernet)
          RX packets 611  bytes 111213 (108.6 KiB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 38  bytes 4943 (4.8 KiB)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    xenbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
          inet 192.168.1.50  netmask 255.255.255.0  broadcast 192.168.1.255
          inet6 fe80::4034:39ff:fecd:b9b3  prefixlen 64  scopeid 0x20
          ether 46:ab:da:21:46:d4  txqueuelen 0  (Ethernet)
          RX packets 89  bytes 11222 (10.9 KiB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 28  bytes 3967 (3.8 KiB)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    $ brctl show
    bridge name     bridge id               STP enabled     interfaces
    xenbr0          8000.46abda2146d4       no              eth1

    In the [DEFAULT] section you will need to put these properties:

    [DEFAULT]
    network_api_class = nova.network.api.API
    security_group_api = nova
    network_manager = nova.network.manager.FlatDHCPManager
    allow_same_net_traffic = True
    multi_host = True
    send_arp_for_ha = True
    share_dhcp_address = True
    force_dhcp_release = True
    flat_network_bridge = xenbr0
    flat_interface = eth1
    public_interface = eth0
    my_ip = MANAGEMENT_INTERFACE_IP
    firewall_driver = nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
  4. Hypervisor settings

    Enable Xen compute driver in the [DEFAULT] section, XAPI endpoint and credentials in the [xenserver] section:

    [DEFAULT]
    compute_driver = xenapi.XenAPIDriver

    [xenserver]
    connection_url = http://XENSERVER_MANAGEMENT_IP
    connection_username = XENSERVER_USERNAME
    connection_password = XENSERVER_PASSWORD
  5. Image service and VNC access

    We are almost done. In the [glance] section configure the location of the Image Service. In the [DEFAULT] section enable remote console access. When deploying OpenStack services for the first time, it's a good idea to enable verbose logging too.

    [glance]
    host = controller

    [DEFAULT]
    vnc_enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP
    novncproxy_base_url = http://controller:6080/vnc_auto.html
    verbose = true
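
If you prefer to script these edits rather than change nova.conf by hand, the same values can be set with openstack-config, the small crudini wrapper shipped in the openstack-utils package (install it with yum if it is not already present). A partial sketch with placeholder values; the remaining options from the list above follow the same pattern:

yum install -y openstack-utils
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASSWORD
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge xenbr0
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver xenapi.XenAPIDriver
openstack-config --set /etc/nova/nova.conf xenserver connection_url http://XENSERVER_MANAGEMENT_IP
openstack-config --set /etc/nova/nova.conf glance host controller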

Start the Compute and Network services and configure them to be automatically started at boot time.

systemctl enable openstack-nova-compute.service openstack-nova-network.service openstack-nova-metadata-api.service
systemctl start openstack-nova-compute.service openstack-nova-network.service openstack-nova-metadata-api.service
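
If either service fails to start, the logs are the first place to look; on an RDO install they end up under /var/log/nova:

systemctl status openstack-nova-compute.service
tail -n 50 /var/log/nova/nova-compute.log
tail -n 50 /var/log/nova/nova-network.log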

Make sure the nova-compute and nova-network are up and running by executing this command on the controller node:

nova service-list
+----+------------------+---------+----------+---------+-------+----------------------------+
| Id | Binary           | Host    | Zone     | Status  | State | Updated_at                 |
+----+------------------+---------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 2  | nova-cert        | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 3  | nova-scheduler   | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 4  | nova-conductor   | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 5  | nova-compute     | compute | nova     | enabled | up    | 2015-01-31T17:57:07.000000 |
| 6  | nova-network     | compute | internal | enabled | up    | 2015-01-31T17:57:00.000000 |
+----+------------------+---------+----------+---------+-------+----------------------------+

Installing and configuring storage node

We can start by creating the storage node VM from the base image we exported earlier. Run these commands in the XenServer console:

SRUUID=$(xe sr-list name-label="Local storage" --minimal)
xe vm-import filename=openstack-juno-centos7.xva force=true sr-uuid=$SRUUID preserve=true

You will need to create and attach the VDI where cinder volumes will be stored. Get the UUID of your newly imported VM, and then run these commands.

VDIUUID=$(xe vdi-create sr-uuid=$SRUUID name-label="cinder" type=user virtual-size=250GiB)
VBDUUID=$(xe vbd-create vm-uuid=$VMUUID vdi-uuid=$VDIUUID device=1)
xe vbd-plug uuid=$VBDUUID
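
Inside the storage VM, the new virtual disk should now be visible, typically as /dev/xvdb:

lsblk
fdisk -l /dev/xvdb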

Install the required dependencies and start the LVM metadata service.

yum install lvm2
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

Partition the disk in order to create the LVM physical volume and the volume group labeled cinder-volumes. Replace /dev/xvdb1 with your partition.
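
If the disk has no partition table yet, one way to create a single partition spanning the whole disk is with parted. This is only a sketch that assumes the entire /dev/xvdb disk is dedicated to cinder:

parted -s /dev/xvdb mklabel msdos
parted -s /dev/xvdb mkpart primary 0% 100%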

pvcreate /dev/xvdb1
vgcreate cinder-volumes /dev/xvdb1

It is also necessary to tell LVM which block storage devices should be scanned. Edit the /etc/lvm/lvm.conf file and modify the filter setting to include the devices backing the created volume group.

devices {
...
filter = [ "a/xvda/", "a/xvdb/", "r/.*/"]
...
}
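
After changing the filter it is worth confirming that LVM still sees the cinder physical volume, since a filter typo silently hides devices:

pvs
vgs cinder-volumes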

We are now ready to install and configure the Block Storage components and dependencies. I wasn't able to get iSCSI LUNs to work using targetcli, probably because XenServer relies on the iSCSI initiator utilities. The solution was to use scsi-target-utils instead.

yum install scsi-target-utils
yum install openstack-cinder python-oslo-db MySQL-python

Edit the /etc/cinder/cinder.conf configuration file.

    1. Message broker settings

      Configure RabbitMQ messaging system in the [DEFAULT] section:

       [DEFAULT]
      rpc_backend = rabbit
      rabbit_host = controller
      rabbit_userid = RABBIT_USER
      rabbit_password = RABBIT_PASSWORD
    2. Keystone authentication

      Modify [DEFAULT] and [keystone_authtoken] sections to configure authentication service access:

      [DEFAULT]
      auth_strategy = keystone

      [keystone_authtoken]
      auth_uri = http://controller:5000/v2.0
      identity_uri = http://controller:35357
      admin_tenant_name = service
      admin_user = cinder
      admin_password = CINDER_PASSWORD
    3. Database connection

      In the [database] section change the MySQL connection string:

       [database]
      connection = mysql://cinder:CINDER_DB_PASSWORD@controller/cinder
    4. Image service and management IP address

      In the [DEFAULT] section, configure the location of the Image Service, set the management interface address to match your storage node IP, and enable verbose logging.

      [DEFAULT]
      glance_host = controller
      my_ip = MANAGEMENT_INTERFACE_IP
      verbose = true
    5. Target administration service

      In the [DEFAULT] section, configure Cinder to use the tgtadm service for iSCSI storage management:

       [DEFAULT]
      iscsi_helper = tgtadm

      Edit /etc/tgt/targets.conf to include the cinder volumes. These files hold information about each volume's location, CHAP credentials, IQN, etc.

      include /etc/cinder/volumes/*

Start the Block Storage and target service and configure them to be automatically started at boot time.

systemctl enable openstack-cinder-volume.service tgtd.service
systemctl start openstack-cinder-volume.service tgtd.service

Run this command on the controller node to ensure the Storage service is up and running.

cinder service-list
+------------------+--------+------+---------+-------+----------------------------+
| Binary           | Host   | Zone | Status  | State | Updated_at                 |
+------------------+--------+------+---------+-------+----------------------------+
| cinder-scheduler | hydra  | nova | enabled | up    | 2015-01-31T17:57:44.000000 |
| cinder-volume    | cinder | nova | enabled | up    | 2015-01-31T17:57:55.000000 |
+------------------+--------+------+---------+-------+----------------------------+
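
To verify the whole chain end to end, you can create a small test volume from the controller (assuming you have sourced an openrc file with valid credentials, as in the official install guide) and check that the matching logical volume appears on the storage node:

cinder create --display-name test-vol 1
cinder list
lvs cinder-volumes      # run this one on the storage node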

Tip: If you are able to attach cinder volumes from OpenStack, but file system creation takes too long or gets stuck, try disabling checksum offloading on the storage node's VIF. Use ethtool -K vifz.0 tx off, where z is the domain identifier of the storage VM.

Validate the OpenStack instance

You should go through these steps to validate your OpenStack environment; a minimal smoke test is sketched below. In the second part we will see how to deploy Cloud Foundry using BOSH and push our first application.
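
The sketch assumes you have already uploaded a CirrOS image to Glance under the name cirros-0.3.3-x86_64, that the default m1.tiny flavor exists, and that the 203.0.113.0/24 range is free on your network; adjust all of these to your environment:

nova network-create vmnet --bridge xenbr0 --multi-host T --fixed-range-v4 203.0.113.0/24
nova net-list
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 test-instance
nova list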
