Reference documents:

  1. Install guide: https://docs.openstack.org/install-guide/
  2. OpenStack High Availability Guide: https://docs.openstack.org/ha-guide/index.html
  3. Understanding Pacemaker: http://www.cnblogs.com/sammyliu/p/5025362.html
  4. Ceph: http://docs.ceph.com/docs/master/start/intro/

10. Nova Control Node Cluster

1. Create the Nova databases

  # Create the databases on any control node; the backend data is synced
  # automatically across the cluster. controller01 is used as the example.
  # Nova uses four databases, all granted to the single 'nova' user.
  # The placement database handles resource tracking; its most-used API calls
  # fetch candidate resources and claim resources.
  [root@controller01 ~]# mysql -u root -pmysql_pass

  MariaDB [(none)]> CREATE DATABASE nova_api;
  MariaDB [(none)]> CREATE DATABASE nova;
  MariaDB [(none)]> CREATE DATABASE nova_cell0;
  MariaDB [(none)]> CREATE DATABASE nova_placement;

  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_dbpass';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';

  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_dbpass';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';

  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_dbpass';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';

  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_dbpass';
  MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';

  MariaDB [(none)]> flush privileges;
  MariaDB [(none)]> exit;
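The eight GRANT statements follow one pattern, so a small shell loop can generate them for review before applying (a sketch — the database names and the 'nova_dbpass' password are the ones used above; feed the file to mysql only after inspecting it):

```shell
# Generate the GRANT statements for the four Nova databases,
# for both 'localhost' and '%' hosts (8 statements in total).
for db in nova_api nova nova_cell0 nova_placement; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY 'nova_dbpass';"
  done
done > /tmp/nova_grants.sql
cat /tmp/nova_grants.sql
```

After reviewing /tmp/nova_grants.sql, it can be applied with `mysql -u root -pmysql_pass < /tmp/nova_grants.sql`, followed by a `flush privileges;`.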

2. Create the nova/placement APIs

  # Run on any control node; controller01 is used as the example.
  # Calling the Nova services requires authentication, so load the credentials script first.
  [root@controller01 ~]# . admin-openrc

1) Create the nova/placement users

  # The service project was already created in the Glance chapter.
  # The nova/placement users live in the "default" domain.
  [root@controller01 ~]# openstack user create --domain default --password=nova_pass nova
  [root@controller01 ~]# openstack user create --domain default --password=placement_pass placement

2) Grant roles to nova/placement

  # Give the nova/placement users the admin role.
  [root@controller01 ~]# openstack role add --project service --user nova admin
  [root@controller01 ~]# openstack role add --project service --user placement admin

3) Create the nova/placement service entities

  # The nova service entity is of type "compute";
  # the placement service entity is of type "placement".
  [root@controller01 ~]# openstack service create --name nova --description "OpenStack Compute" compute
  [root@controller01 ~]# openstack service create --name placement --description "Placement API" placement

4) Create the nova/placement API endpoints

  # Note: --region must match the region generated when the admin user was initialized.
  # All API addresses use the VIP; if public/internal/admin use different VIPs, adjust accordingly.
  # The nova-api service type is compute; the placement-api service type is placement.
  # nova public api
  [root@controller01 ~]# openstack endpoint create --region RegionTest compute public http://controller:8774/v2.1

  # nova internal api
  [root@controller01 ~]# openstack endpoint create --region RegionTest compute internal http://controller:8774/v2.1

  # nova admin api
  [root@controller01 ~]# openstack endpoint create --region RegionTest compute admin http://controller:8774/v2.1

  # placement public api
  [root@controller01 ~]# openstack endpoint create --region RegionTest placement public http://controller:8778

  # placement internal api
  [root@controller01 ~]# openstack endpoint create --region RegionTest placement internal http://controller:8778

  # placement admin api
  [root@controller01 ~]# openstack endpoint create --region RegionTest placement admin http://controller:8778
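The six endpoint-create calls differ only in service type and interface, so they can be generated in a loop for review (a sketch — it only prints the commands, using the same region and VIP-based URLs as above):

```shell
# Print the six endpoint-create commands (3 interfaces x 2 services).
for svc in "compute http://controller:8774/v2.1" "placement http://controller:8778"; do
  set -- $svc                       # $1 = service type, $2 = URL
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionTest $1 $iface $2"
  done
done | tee /tmp/endpoint_cmds.txt
```

Pipe the reviewed output through `sh` (with admin-openrc loaded) to actually create the endpoints.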

3. Install Nova

  # Install the Nova services on all control nodes; controller01 is used as the example.
  [root@controller01 ~]# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y

4. Configure nova.conf

  # Run on all control nodes; controller01 is used as the example.
  # Adjust the "my_ip" parameter per node.
  # Note the ownership of nova.conf: root:nova.
  [root@controller01 ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
  [root@controller01 ~]# egrep -v "^$|^#" /etc/nova/nova.conf
  [DEFAULT]
  my_ip=172.30.200.31
  use_neutron=true
  firewall_driver=nova.virt.firewall.NoopFirewallDriver
  enabled_apis=osapi_compute,metadata
  osapi_compute_listen=$my_ip
  osapi_compute_listen_port=8774
  metadata_listen=$my_ip
  metadata_listen_port=8775
  # With haproxy in front, services can hit connection timeouts and reconnects when
  # talking to rabbitmq; check the service and rabbitmq logs to confirm.
  # transport_url=rabbit://openstack:rabbitmq_pass@controller:5673
  # rabbitmq clusters natively, and the official docs recommend connecting to the
  # rabbitmq cluster directly. Services occasionally fail to start with that setup
  # for reasons unknown; if you do not see this, connecting straight to the rabbitmq
  # cluster rather than going through the haproxy frontend is strongly recommended.
  transport_url=rabbit://openstack:rabbitmq_pass@controller01:5672,controller02:5672,controller03:5672
  [api]
  auth_strategy=keystone
  [api_database]
  connection=mysql+pymysql://nova:nova_dbpass@controller/nova_api
  [barbican]
  [cache]
  backend=oslo_cache.memcache_pool
  enabled=True
  memcache_servers=controller01:11211,controller02:11211,controller03:11211
  [cells]
  [cinder]
  [compute]
  [conductor]
  [console]
  [consoleauth]
  [cors]
  [crypto]
  [database]
  connection = mysql+pymysql://nova:nova_dbpass@controller/nova
  [devices]
  [ephemeral_storage_encryption]
  [filter_scheduler]
  [glance]
  api_servers = http://controller:9292
  [guestfs]
  [healthcheck]
  [hyperv]
  [ironic]
  [key_manager]
  [keystone]
  [keystone_authtoken]
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller01:11211,controller02:11211,controller03:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = nova
  password = nova_pass
  [libvirt]
  [matchmaker_redis]
  [metrics]
  [mks]
  [neutron]
  [notifications]
  [osapi_v21]
  [oslo_concurrency]
  lock_path=/var/lib/nova/tmp
  [oslo_messaging_amqp]
  [oslo_messaging_kafka]
  [oslo_messaging_notifications]
  [oslo_messaging_rabbit]
  [oslo_messaging_zmq]
  [oslo_middleware]
  [oslo_policy]
  [pci]
  [placement]
  region_name = RegionTest
  project_domain_name = Default
  project_name = service
  auth_type = password
  user_domain_name = Default
  auth_url = http://controller:35357/v3
  username = placement
  password = placement_pass
  [quota]
  [rdp]
  [remote_debug]
  [scheduler]
  [serial_console]
  [service_user]
  [spice]
  [upgrade_levels]
  [vault]
  [vendordata_dynamic_auth]
  [vmware]
  [vnc]
  enabled=true
  server_listen=$my_ip
  server_proxyclient_address=$my_ip
  novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
  novncproxy_host=$my_ip
  novncproxy_port=6080
  [workarounds]
  [wsgi]
  [xenserver]
  [xvp]
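Only my_ip differs between controllers (everything that listens on $my_ip follows from it), so the file can be copied from controller01 and patched per node. A sketch, demonstrated on a scratch copy — on a real node, point NOVA_CONF at /etc/nova/nova.conf and set MY_IP to that node's address (172.30.200.32 is illustrative, assuming it is controller02's IP):

```shell
# Rewrite the my_ip line for this node on a scratch stand-in file.
NOVA_CONF=/tmp/nova.conf        # real file: /etc/nova/nova.conf
MY_IP=172.30.200.32             # this node's address (assumed controller02)
printf '[DEFAULT]\nmy_ip=172.30.200.31\n' > "$NOVA_CONF"   # stand-in for the copied file
sed -i "s/^my_ip=.*/my_ip=${MY_IP}/" "$NOVA_CONF"
grep "^my_ip=" "$NOVA_CONF"
```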

5. Configure 00-nova-placement-api.conf

  # Run on all control nodes; controller01 is used as the example.
  # Adjust the listen address per node.
  [root@controller01 ~]# cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak
  [root@controller01 ~]# sed -i "s/Listen\ 8778/Listen\ 172.30.200.31:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf
  [root@controller01 ~]# sed -i "s/*:8778/172.30.200.31:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf
  [root@controller01 ~]# echo "

  #Placement API
  <Directory /usr/bin>
  <IfVersion >= 2.4>
  Require all granted
  </IfVersion>
  <IfVersion < 2.4>
  Order allow,deny
  Allow from all
  </IfVersion>
  </Directory>
  " >> /etc/httpd/conf.d/00-nova-placement-api.conf

  # Restart httpd so placement-api starts listening on its port.
  [root@controller01 ~]# systemctl restart httpd
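The two sed substitutions above can be rehearsed on a scratch copy before touching the real file (a sketch — the two lines below stand in for the relevant lines of the stock config):

```shell
# Rehearse the Listen-address substitutions on a scratch copy.
cat > /tmp/00-nova-placement-api.conf <<'EOF'
Listen 8778
<VirtualHost *:8778>
EOF
sed -i "s/Listen 8778/Listen 172.30.200.31:8778/g" /tmp/00-nova-placement-api.conf
sed -i "s/\*:8778/172.30.200.31:8778/g" /tmp/00-nova-placement-api.conf
cat /tmp/00-nova-placement-api.conf
```

Both the Listen directive and the VirtualHost line should now carry the node address; run the same seds against /etc/httpd/conf.d/00-nova-placement-api.conf once the result looks right.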

6. Sync the Nova databases

1) Sync the Nova databases

  # Run on any control node.
  # Sync the nova-api database.
  [root@controller01 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

  # Register the cell0 database.
  [root@controller01 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

  # Create the cell1 cell.
  [root@controller01 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

  # Sync the nova database;
  # "deprecated" warnings can be ignored.
  [root@controller01 ~]# su -s /bin/sh -c "nova-manage db sync" nova

Supplement

In this release, syncing the tables into the database reports the error: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported

exception.NotSupportedWarning

The workaround is as follows:

bug: https://bugs.launchpad.net/nova/+bug/1746530

patch: https://github.com/openstack/oslo.db/commit/c432d9e93884d6962592f6d19aaec3f8f66ac3a2

2) Verify

  # Confirm cell0 and cell1 are registered correctly.
  [root@controller01 ~]# nova-manage cell_v2 list_cells

  # Inspect the tables.
  [root@controller01 ~]# mysql -h controller01 -u nova -pnova_dbpass -e "use nova_api;show tables;"
  [root@controller01 ~]# mysql -h controller01 -u nova -pnova_dbpass -e "use nova;show tables;"
  [root@controller01 ~]# mysql -h controller01 -u nova -pnova_dbpass -e "use nova_cell0;show tables;"

7. Start the services

  # Run on all control nodes; controller01 is used as the example.
  # Enable at boot.
  [root@controller01 ~]# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

  # Start.
  [root@controller01 ~]# systemctl restart openstack-nova-api.service
  [root@controller01 ~]# systemctl restart openstack-nova-consoleauth.service
  [root@controller01 ~]# systemctl restart openstack-nova-scheduler.service
  [root@controller01 ~]# systemctl restart openstack-nova-conductor.service
  [root@controller01 ~]# systemctl restart openstack-nova-novncproxy.service

  # Check status.
  [root@controller01 ~]# systemctl status openstack-nova-api.service \
  openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

  # Check the listening ports.
  [root@controller01 ~]# netstat -tunlp | egrep '8774|8775|8778|6080'
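Since each node's nova-api, metadata, placement, and novncproxy services listen on that node's my_ip, the VIP-fronted haproxy setup needs matching frontends for ports 8774, 8775, 8778, and 6080. A sketch of one such stanza, assuming the haproxy configuration from the earlier pacemaker&haproxy chapter — the VIP 172.30.200.30 and the controller02/03 addresses are illustrative, and the balance/check parameters are typical values, not taken from this deployment:

```
# nova-api fronted by the VIP; repeat the pattern for 8775, 8778 and 6080
listen nova_compute_api_cluster
  bind 172.30.200.30:8774          # assumed VIP
  balance source
  option tcpka
  option tcplog
  server controller01 172.30.200.31:8774 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:8774 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:8774 check inter 2000 rise 2 fall 5
```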

8. Verify

  [root@controller01 ~]# . admin-openrc

  # List the service components and check their status;
  # the command "nova service-list" works as well.
  [root@controller01 ~]# openstack compute service list

  # Show the API endpoints.
  [root@controller01 ~]# openstack catalog list

  # Check that the cells and the placement API are working.
  [root@controller01 ~]# nova-status upgrade check

9. Configure pcs resources

  # Run on any control node.
  # Add the openstack-nova-api, openstack-nova-consoleauth, openstack-nova-scheduler,
  # openstack-nova-conductor and openstack-nova-novncproxy resources.
  [root@controller01 ~]# pcs resource create openstack-nova-api systemd:openstack-nova-api --clone interleave=true
  [root@controller01 ~]# pcs resource create openstack-nova-consoleauth systemd:openstack-nova-consoleauth --clone interleave=true
  [root@controller01 ~]# pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler --clone interleave=true
  [root@controller01 ~]# pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor --clone interleave=true
  [root@controller01 ~]# pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy --clone interleave=true

  # In practice, the stateless services openstack-nova-api, openstack-nova-consoleauth,
  # openstack-nova-conductor and openstack-nova-novncproxy are best run active/active;
  # services such as openstack-nova-scheduler are best run active/passive.

  # List the pcs resources.
  [root@controller01 ~]# pcs resource
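The five resource-create calls again share one pattern, so they can be printed in a loop for review before running them against the cluster (a sketch that only echoes the commands shown above):

```shell
# Print the five pcs resource-create commands.
for svc in openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler \
           openstack-nova-conductor openstack-nova-novncproxy; do
  echo "pcs resource create $svc systemd:$svc --clone interleave=true"
done | tee /tmp/pcs_cmds.txt
```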
