I. Introduction to OpenStack

OpenStack is a free, open-source software project, licensed under the Apache License, that was jointly initiated by NASA (the National Aeronautics and Space Administration) and Rackspace.
OpenStack is an open-source cloud-computing management platform made up of several major components that work together. It supports almost every type of cloud environment, and the project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.
OpenStack is an open-source project that provides software for building and managing public and private clouds. Its community includes more than 130 companies and 1,350 developers, all of whom use OpenStack as a common front end for Infrastructure-as-a-Service (IaaS) resources. The project's primary goals are to simplify cloud deployment and to make clouds scale well. This article aims to provide the guidance needed to set up and manage your own public or private cloud with OpenStack.
The OpenStack cloud platform helps service providers and enterprises deliver cloud infrastructure services (IaaS) similar to Amazon EC2 and S3. OpenStack began with two main modules: Nova, the virtual-server provisioning and compute module developed by NASA, and Swift, the distributed cloud-storage module developed by Rackspace; they can be used together or separately. Beyond the strong backing of Rackspace and NASA, OpenStack receives contributions and support from heavyweight companies including Dell, Citrix, Cisco, and Canonical. It is growing quickly and looks set to displace Eucalyptus, another leading open-source cloud platform.
(Figure: mapping of the OpenStack services to their code names; image not reproduced here)

II. Deployment Environment

1. Host information

Role descriptions and requirements:

Controller:

  1. The controller node runs the Identity service, the Image service, the management portions of the Compute and Networking services, various network agents, and the Dashboard. It also hosts supporting services such as the SQL database, the message queue, and NTP.
  2. Optionally, the controller node can also run the Block Storage, Object Storage, Orchestration, and Telemetry services.
  3. The controller node requires at least two network interfaces.

Compute:

  1. A compute node runs the hypervisor portion of Compute that operates instances; by default it uses KVM as the hypervisor. It also runs a Networking agent that connects instances to virtual networks and provides firewalling to instances via security groups.
  2. You can deploy more than one compute node. Each node requires at least two network interfaces.

Block storage:

  1. The optional block storage node contains the disks that the Block Storage service provisions for instances.
  2. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to improve performance and security.
  3. You can deploy more than one block storage node. Each node requires at least one network interface.

Object storage:

  1. The optional object storage nodes contain the disks that the Object Storage service uses to store accounts, containers, and objects.
  2. For simplicity, service traffic between compute nodes and these nodes uses the management network. Production environments should implement a separate storage network to improve performance and security.
  3. This service requires two nodes, each with at least one network interface. You can deploy more than two object storage nodes.

Networking:

  1. Choose one of the virtual networking options below.
  2. Networking option 1: provider networks
  3. The provider-networks option deploys the OpenStack Networking service in the simplest way possible, mainly layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and
    relies on the physical network infrastructure for layer-3 (routing) services. DHCP provides IP address information to instances.
  4. Note: this option does not support self-service (private) networks, layer-3 (routing) services, or advanced services such as LBaaS and FWaaS. Consider the self-service networks option if you want these features.
  5. Networking option 2: self-service networks
  6. The self-service networks option augments the provider-networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT.
    Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS. (A short usage sketch follows this list.)
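As a hedged illustration of the difference between the two options (these commands assume the Networking service from section VIII is already fully deployed; the network names, physical network label, and CIDRs are hypothetical examples, not part of this walkthrough):

    # Option 1 style: a flat provider network bridged straight onto the physical network
    [root@controller ~]# neutron net-create public --shared --provider:physical_network public --provider:network_type flat
    [root@controller ~]# neutron subnet-create public 192.168.1.0/24 --name public-subnet --gateway 192.168.1.1
    # Option 2 style: a self-service tenant network carried over a VXLAN overlay and NATed out through a router
    [root@controller ~]# neutron net-create private
    [root@controller ~]# neutron subnet-create private 10.0.0.0/24 --name private-subnet --dns-nameserver 8.8.8.8
    [root@controller ~]# neutron router-create router && neutron router-interface-add router private-subnet && neutron router-gateway-set router public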

2. Name resolution and disabling the firewall (on all hosts)

  1. /etc/hosts #hostnames must not be changed after they are set here
  2. 192.168.1.101 controller
  3. 192.168.1.102 compute1
  4. 192.168.1.103 block1
  5. 192.168.1.104 object1
  6. 192.168.1.105 object2
  7. Disable SELinux
  8. sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
  9. setenforce 0
  10. Disable the firewall
  11. systemctl stop firewalld.service
  12. systemctl disable firewalld.service
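A quick sanity check after the steps above (run on each node; the hostnames follow the /etc/hosts table):

    [root@controller ~]# getenforce #Permissive after setenforce 0, Disabled after the next reboot
    [root@controller ~]# systemctl is-active firewalld.service #should print inactive
    [root@controller ~]# ping -c 1 compute1 #verifies /etc/hosts name resolution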

3. Passwords, time synchronization, and yum+EPEL repositories

  1. Passwords: the installation involves passwords for many services; for easy recall they are all set to "123456" in this walkthrough. Do not do this in a production environment.
  2. Time synchronization: see http://www.cnblogs.com/panwenbin-logs/p/8384340.html
  3. yum+EPEL repositories: a domestic mirror such as 163 or Aliyun is recommended
  4. OpenStack repository:
  5. cat /etc/yum.repos.d/CentOS-OpenStack-liberty.repo
  6. [centos-openstack-liberty]
  7. name=CentOS-7 - OpenStack liberty
  8. baseurl=http://vault.centos.org/centos/7.3.1611/cloud/x86_64/openstack-liberty/
  9. gpgcheck=0
  10. enabled=1
  11. gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
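Before proceeding, it is worth confirming that yum can actually see the new repository; a hedged check (the exact package versions shown will vary):

    [root@controller ~]# yum clean all
    [root@controller ~]# yum repolist enabled | grep -i openstack #centos-openstack-liberty should be listed
    [root@controller ~]# yum info openstack-keystone | head #should resolve from the new repository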

4. Upgrade installed packages

  1. yum upgrade
  2. reboot #reboot to pick up the new packages

5. Install the OpenStack client

  1. yum install -y python-openstackclient
  2. yum install -y openstack-selinux #if SELinux is enabled, the openstack-selinux package automatically manages the security policies for OpenStack services

III. Installing and Configuring the Database Service (MariaDB)

  1. [root@controller ~]# yum install -y mariadb mariadb-server MySQL-python
  2. [root@controller ~]# cp /usr/share/mariadb/my-medium.cnf /etc/my.cnf #or /usr/share/mysql/my-medium.cnf
  3. [root@controller ~]# vim /etc/my.cnf
  4. [mysqld]
  5. bind-address = 192.168.1.101
  6. default-storage-engine = innodb
  7. innodb_file_per_table
  8. collation-server = utf8_general_ci
  9. init-connect = 'SET NAMES utf8'
  10. character-set-server = utf8
  11. max_connections=
  12. [root@controller ~]# systemctl enable mariadb.service && systemctl start mariadb.service #start the database service and configure it to start at boot
  13. [root@controller ~]# mysql_secure_installation #set the password to 123456, then answer y to every prompt
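A quick way to confirm the my.cnf settings took effect (the values follow this walkthrough's configuration):

    [root@controller ~]# mysql -u root -p123456 -e "SHOW VARIABLES LIKE 'character_set_server';" #should report utf8
    [root@controller ~]# netstat -tnlp | grep 3306 #mysqld should be listening on 192.168.1.101:3306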

IV. Installing and Configuring the Message Queue Service (RabbitMQ)

  1. [root@controller ~]# yum install -y rabbitmq-server
  2. [root@controller ~]# systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
  3. [root@controller ~]# rabbitmqctl add_user openstack 123456 #add the openstack user with password 123456
  4. [root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*" #give the openstack user configure, write, and read permissions
    [root@controller ~]# rabbitmq-plugins list #list the available plugins
    [root@controller ~]# rabbitmq-plugins enable rabbitmq_management #enable the web management plugin
    [root@controller ~]# systemctl restart rabbitmq-server.service
    [root@controller ~]# netstat -tnlp|grep beam
    tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 997/beam #management UI port
    tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 997/beam #inter-node communication port
    tcp6 0 0 :::5672 :::* LISTEN 997/beam #client (AMQP) port

Access RabbitMQ at http://192.168.1.101:15672/; the default username and password are both guest.

Log out of the guest user and check that you can log in as the openstack user.
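Note that a RabbitMQ user without a management tag cannot log into the web UI, so the login test above may fail until a tag is granted. A hedged sketch of granting the tag and checking the credentials through the management API (the administrator tag is only needed for the UI; the OpenStack services speak plain AMQP and do not need it):

    [root@controller ~]# rabbitmqctl set_user_tags openstack administrator #allow web/API login for the openstack user
    [root@controller ~]# curl -s -u openstack:123456 http://192.168.1.101:15672/api/whoami #should return a small JSON document naming the user and its tags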

V. Installing and Configuring the OpenStack Identity Service (code name: keystone). For performance, this deployment uses the Apache HTTP server to answer requests and Memcached to store tokens, rather than the SQL database.

1. Service overview

  1. The OpenStack Identity service provides a single point of integration for authentication, authorization, and service-catalog management. Other OpenStack services use it as a common, unified API. In addition, services that provide user information but are not part of OpenStack (such as an LDAP service) can be integrated into pre-existing infrastructure.
  2. To benefit from the Identity service, other OpenStack services must work with it. When an OpenStack service receives a request from a user, it asks the Identity service to verify that the user is authorized to make the request.
  3. The Identity service contains these components:
  4. Server
  5.   A centralized server provides authentication and authorization services through a RESTful interface.
  6. Drivers
  7.   Drivers, or service back ends, are integrated into the centralized server. They are used to access identity information in repositories external to OpenStack, and they may already exist in the infrastructure where OpenStack is deployed (for example, a SQL database or an LDAP server).
  8. Modules
  9.   Middleware modules run in the address space of the OpenStack components that use the Identity service. These modules intercept service requests, extract the user credentials, and send them to the central server for authorization. Integration between the middleware modules and the OpenStack components uses the Python Web Server Gateway Interface (WSGI).
  10. When you install the OpenStack Identity service, you must register it with every other service in your OpenStack installation. The Identity service can then track which OpenStack services are installed and locate them on the network.

2. Prerequisites: before you configure the OpenStack Identity service, you must create a database and grant the appropriate privileges.

  1. [root@controller ~]# mysql -u root -p123456
  2. MariaDB [(none)]> CREATE DATABASE keystone;
  3. Query OK, 1 row affected (0.00 sec)
  4. MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
  5. Query OK, 0 rows affected (0.01 sec)
  6. MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
  7. Query OK, 0 rows affected (0.00 sec)
  8. MariaDB [(none)]> show databases; #check that the database was created
  9. +--------------------+
  10. | Database |
  11. +--------------------+
  12. | information_schema |
  13. | keystone |
  14. | mysql |
  15. | performance_schema |
  16. +--------------------+
  17. MariaDB [(none)]> select User,Password,Host from mysql.user where User like "keystone"; #check the grants
  18. +----------+-------------------------------------------+-----------+
  19. | User | Password | Host |
  20. +----------+-------------------------------------------+-----------+
  21. | keystone | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | % |
  22. | keystone | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | localhost |
  23. +----------+-------------------------------------------+-----------+
  24. MariaDB [(none)]> \q
  25. Bye

3. Service installation

  1. [root@controller ~]# yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
  2. [root@controller ~]# systemctl enable memcached.service && systemctl start memcached.service
  3. [root@controller ~]# netstat -tnlp|grep memcached
  4. tcp 127.0.0.1:11211 0.0.0.0:* LISTEN /memcached
  5. tcp6 :::11211 :::* LISTEN /memcached
  6. [root@controller ~]# openssl rand -hex 10 #generate a random value to use as the admin token
  7. db771afcb68c09caee6d
  8. [root@controller ~]# grep "^[a-z]" -B 1 /etc/keystone/keystone.conf
  9. [DEFAULT]
  10. admin_token = db771afcb68c09caee6d #the admin token generated above
  11. [database]
  12. connection = mysql://keystone:123456@controller/keystone #configure database access
  13. [memcache]
  14. servers = localhost:11211 #configure the Memcached service location
  15. [revoke]
  16. driver = sql #configure the SQL driver for token revocation
  17. [token]
  18. provider = uuid #configure the UUID token provider and the Memcached token driver
  19. driver = memcache
  20. [root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone #populate the Identity service database
  21. [root@controller ~]# tail /var/log/keystone/keystone.log #check the log for errors
  22. -- ::08.343 INFO migrate.versioning.api [-] -> ...
  23. -- ::08.406 INFO migrate.versioning.api [-] done
  24. -- ::08.407 INFO migrate.versioning.api [-] -> ...
  25. -- ::08.565 INFO migrate.versioning.api [-] done
  26. -- ::08.565 INFO migrate.versioning.api [-] -> ...
  27. -- ::08.600 INFO migrate.versioning.api [-] done
  28. -- ::08.620 INFO migrate.versioning.api [-] -> ...
  29. -- ::08.667 INFO migrate.versioning.api [-] done
  30. -- ::08.667 INFO migrate.versioning.api [-] -> ...
  31. -- ::08.813 INFO migrate.versioning.api [-] done
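Besides tailing the log, a hedged way to confirm that db_sync really created the schema (the exact table list varies by release):

    [root@controller ~]# mysql -u keystone -p123456 -e "USE keystone; SHOW TABLES;" | head #tables such as user, project, and token should appear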

Configure the Apache HTTP server

  1. [root@controller ~]# grep -n "^ServerName" /etc/httpd/conf/httpd.conf #set the ServerName option to the controller node
  2. :ServerName controller
  3. [root@controller ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf
  4. Listen 5000
  5. Listen 35357
  6. <VirtualHost *:5000>
  7. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  8. WSGIProcessGroup keystone-public
  9. WSGIScriptAlias / /usr/bin/keystone-wsgi-public
  10. WSGIApplicationGroup %{GLOBAL}
  11. WSGIPassAuthorization On
  12. <IfVersion >= 2.4>
  13. ErrorLogFormat "%{cu}t %M"
  14. </IfVersion>
  15. ErrorLog /var/log/httpd/keystone-error.log
  16. CustomLog /var/log/httpd/keystone-access.log combined
  17. <Directory /usr/bin>
  18. <IfVersion >= 2.4>
  19. Require all granted
  20. </IfVersion>
  21. <IfVersion < 2.4>
  22. Order allow,deny
  23. Allow from all
  24. </IfVersion>
  25. </Directory>
  26. </VirtualHost>
  27. <VirtualHost *:35357>
  28. WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  29. WSGIProcessGroup keystone-admin
  30. WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
  31. WSGIApplicationGroup %{GLOBAL}
  32. WSGIPassAuthorization On
  33. <IfVersion >= 2.4>
  34. ErrorLogFormat "%{cu}t %M"
  35. </IfVersion>
  36. ErrorLog /var/log/httpd/keystone-error.log
  37. CustomLog /var/log/httpd/keystone-access.log combined
  38. <Directory /usr/bin>
  39. <IfVersion >= 2.4>
  40. Require all granted
  41. </IfVersion>
  42. <IfVersion < 2.4>
  43. Order allow,deny
  44. Allow from all
  45. </IfVersion>
  46. </Directory>
  47. </VirtualHost>
  48. [root@controller ~]# systemctl enable httpd.service && systemctl start httpd.service #start the Apache HTTP service and configure it to start at boot
  49. [root@controller ~]# netstat -tnlp|grep httpd
  50. tcp6 :::80 :::* LISTEN /httpd
  51. tcp6 :::35357 :::* LISTEN /httpd #admin endpoint, usable only by admin roles
    tcp6 :::5000 :::* LISTEN /httpd #public endpoint, used by regular users
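A hedged smoke test of the two WSGI endpoints; keystone should answer on both ports with a JSON version document:

    [root@controller ~]# curl -s http://controller:5000/v3 | head -c 200; echo #public endpoint
    [root@controller ~]# curl -s http://controller:35357/v3 | head -c 200; echo #admin endpoint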

Create the service entity and API endpoints

  1. [root@controller ~]# export OS_URL=http://controller:35357/v3 #configure the endpoint URL
  2. [root@controller ~]# export OS_IDENTITY_API_VERSION=3 #configure the Identity API version
  3. [root@controller ~]# export OS_TOKEN=db771afcb68c09caee6d #configure the authentication token
  4. [root@controller ~]# env|grep ^OS #check that the settings took effect
  5. OS_IDENTITY_API_VERSION=3
  6. OS_TOKEN=db771afcb68c09caee6d
  7. OS_URL=http://controller:35357/v3
  8. In an OpenStack environment, the Identity service manages a catalog of services. Services use this catalog to determine which other services are available in your environment.
  9. [root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity #create the service entity for the Identity service
  10. +-------------+----------------------------------+
  11. | Field | Value |
  12. +-------------+----------------------------------+
  13. | description | OpenStack Identity |
  14. | enabled | True |
  15. | id | 351c5f4d5174430eacb38b16a6403d40 |
  16. | name | keystone |
  17. | type | identity |
  18. +-------------+----------------------------------+
  19. The Identity service also manages a catalog of API endpoints associated with your environment. Services use this catalog to determine how to communicate with the other services in your environment.
  20. OpenStack uses three API endpoint variants for each service: admin, internal, and public. By default, the admin API endpoint allows modifying users and tenants, while the public and internal APIs do not.
    In a production environment, the variants may reside on separate networks that serve different types of users, for security reasons. For instance, the public API network is visible on the Internet so customers can manage their own clouds;
    the admin API network is restricted to the operators within the organization that manages the cloud infrastructure; and the internal API network may be restricted to the hosts that run OpenStack services. Additionally, OpenStack supports multiple regions for scalability.
    [root@controller ~]# openstack endpoint create --region RegionOne identity public http://controller:5000/v2.0 #create the Identity service API endpoints
  21. +--------------+----------------------------------+
  22. | Field | Value |
  23. +--------------+----------------------------------+
  24. | enabled | True |
  25. | id | 1ee55eac378f4d179bacb4ea3d1850d1 |
  26. | interface | public |
  27. | region | RegionOne |
  28. | region_id | RegionOne |
  29. | service_id | 351c5f4d5174430eacb38b16a6403d40 |
  30. | service_name | keystone |
  31. | service_type | identity |
  32. | url | http://controller:5000/v2.0 |
  33. +--------------+----------------------------------+
  34. [root@controller ~]# openstack endpoint create --region RegionOne identity internal http://controller:5000/v2.0
  35. +--------------+----------------------------------+
  36. | Field | Value |
  37. +--------------+----------------------------------+
  38. | enabled | True |
  39. | id | 00da46788e874f529f67046226c7b0c9 |
  40. | interface | internal |
  41. | region | RegionOne |
  42. | region_id | RegionOne |
  43. | service_id | 351c5f4d5174430eacb38b16a6403d40 |
  44. | service_name | keystone |
  45. | service_type | identity |
  46. | url | http://controller:5000/v2.0 |
  47. +--------------+----------------------------------+
  48. [root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:35357/v2.0
  49. +--------------+----------------------------------+
  50. | Field | Value |
  51. +--------------+----------------------------------+
  52. | enabled | True |
  53. | id | fab8917d632a4a8c8ccb4290cbd382c6 |
  54. | interface | admin |
  55. | region | RegionOne |
  56. | region_id | RegionOne |
  57. | service_id | 351c5f4d5174430eacb38b16a6403d40 |
  58. | service_name | keystone |
  59. | service_type | identity |
  60. | url | http://controller:35357/v2.0 |
  61. +--------------+----------------------------------+
  62. Note: each service that you add to your OpenStack environment requires one or more service entities and three API endpoint variants in the Identity service.
  63. For administrative operations, create an administrative project, user, and role
  64. [root@controller ~]# openstack project create --domain default --description "Admin Project" admin #create the admin project
  65. +-------------+----------------------------------+
  66. | Field | Value |
  67. +-------------+----------------------------------+
  68. | description | Admin Project |
  69. | domain_id | default |
  70. | enabled | True |
  71. | id | 839cdfc946e1491c8004e3b732d17f9a |
  72. | is_domain | False |
  73. | name | admin |
  74. | parent_id | None |
  75. +-------------+----------------------------------+
  76. [root@controller ~]# openstack user create --domain default --password-prompt admin #create the admin user
  77. User Password: #set the password to 123456
  78. Repeat User Password:
  79. +-----------+----------------------------------+
  80. | Field | Value |
  81. +-----------+----------------------------------+
  82. | domain_id | default |
  83. | enabled | True |
  84. | id | d4f0c9b24be84306960e29a7961d22a3 |
  85. | name | admin |
  86. +-----------+----------------------------------+
  87. [root@controller ~]# openstack role create admin #create the admin role
  88. +-------+----------------------------------+
  89. | Field | Value |
  90. +-------+----------------------------------+
  91. | id | ebab14b851254fe69abb49132f3b76a2 |
  92. | name | admin |
  93. +-------+----------------------------------+
  94. [root@controller ~]# openstack role add --project admin --user admin admin #add the admin role to the admin project and user; this command produces no output
  95. Each service contains a unique user in a service project. Create the service project:
  96. [root@controller ~]# openstack project create --domain default --description "Service Project" service
  97. +-------------+----------------------------------+
  98. | Field | Value |
  99. +-------------+----------------------------------+
  100. | description | Service Project |
  101. | domain_id | default |
  102. | enabled | True |
  103. | id | cfbdca3af1a043d8ace0f47724312e60 |
  104. | is_domain | False |
  105. | name | service |
  106. | parent_id | None |
  107. +-------------+----------------------------------+
  108. Regular (non-admin) tasks should use an unprivileged project and user. As an example, create a demo project and user
  109. [root@controller ~]# openstack project create --domain default --description "Demo Project" demo #create the demo project; do not repeat this step when creating additional users for this project
  110. +-------------+----------------------------------+
  111. | Field | Value |
  112. +-------------+----------------------------------+
  113. | description | Demo Project |
  114. | domain_id | default |
  115. | enabled | True |
  116. | id | 2003811a2ad548e7b686f06a55fe9ce9 |
  117. | is_domain | False |
  118. | name | demo |
  119. | parent_id | None |
  120. +-------------+----------------------------------+
  121. [root@controller ~]# openstack user create --domain default --password-prompt demo #create the demo user
  122. User Password:
  123. Repeat User Password:
  124. +-----------+----------------------------------+
  125. | Field | Value |
  126. +-----------+----------------------------------+
  127. | domain_id | default |
  128. | enabled | True |
  129. | id | d4ffbeefe72d412187047a79e3a51d00 |
  130. | name | demo |
  131. +-----------+----------------------------------+
  132. [root@controller ~]# openstack role create user #create the user role
  133. +-------+----------------------------------+
  134. | Field | Value |
  135. +-------+----------------------------------+
  136. | id | a1b9a999563544daa808e5ee1e0edaf0 |
  137. | name | user |
  138. +-------+----------------------------------+
  139. [root@controller ~]# openstack role add --project demo --user demo user #add the user role to the demo project and user; you can repeat this procedure to create additional projects and users
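Because every additional project/user/role combination uses the same three commands, the repetition can be scripted. A minimal sketch (the team1 and team2 names are hypothetical examples):

    [root@controller ~]# for p in team1 team2; do
    > openstack project create --domain default --description "$p Project" $p
    > openstack user create --domain default --password 123456 ${p}-user
    > openstack role add --project $p --user ${p}-user user
    > done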

Verify operation

  1. [root@controller ~]# vim /usr/share/keystone/keystone-dist-paste.ini #for security reasons, disable the temporary admin-token authentication mechanism by removing admin_token_auth from the following three pipeline sections
  2. [pipeline:public_api]
  3. [pipeline:admin_api]
  4. [pipeline:api_v3]
  5. [root@controller ~]# unset OS_TOKEN OS_URL #unset the OS_TOKEN and OS_URL environment variables
  6. [root@controller ~]# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue #as the admin user, request an authentication token; the password is 123456
  7. Password:
  8. +------------+----------------------------------+
  9. | Field | Value |
  10. +------------+----------------------------------+
  11. | expires | --03T15::.805097Z |
  12. | id | ed30245e370648a185539a970e6c9e19 |
  13. | project_id | 839cdfc946e1491c8004e3b732d17f9a |
  14. | user_id | d4f0c9b24be84306960e29a7961d22a3 |
  15. +------------+----------------------------------+
  16. [root@controller ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password token issue #as the demo user, request an authentication token
  17. Password:
  18. +------------+----------------------------------+
  19. | Field | Value |
  20. +------------+----------------------------------+
  21. | expires | --03T15::.135574Z |
  22. | id | a9c52f8f92804a81b7d0c6b5496a8ee3 |
  23. | project_id | 2003811a2ad548e7b686f06a55fe9ce9 |
  24. | user_id | d4ffbeefe72d412187047a79e3a51d00 |
  25. +------------+----------------------------------+
  26. So far we have interacted with the Identity service through the openstack client using a combination of environment variables and command options. To make client operations more efficient, OpenStack supports simple client-environment scripts, known as OpenRC files.
  27. Create client-environment scripts for the admin and demo projects and users; the scripts load the appropriate credentials for client operations.
  28. [root@controller ~]# cat admin-openrc.sh #edit the file admin-openrc.sh and add the following
  29. export OS_PROJECT_DOMAIN_ID=default
  30. export OS_USER_DOMAIN_ID=default
  31. export OS_PROJECT_NAME=admin
  32. export OS_TENANT_NAME=admin
  33. export OS_USERNAME=admin
  34. export OS_PASSWORD=123456
  35. export OS_AUTH_URL=http://controller:35357/v3
  36. export OS_IDENTITY_API_VERSION=3
  37. [root@controller ~]# cat demo-openrc.sh #edit the file demo-openrc.sh and add the following
  38. export OS_PROJECT_DOMAIN_ID=default
  39. export OS_USER_DOMAIN_ID=default
  40. export OS_PROJECT_NAME=demo
  41. export OS_TENANT_NAME=demo
  42. export OS_USERNAME=demo
  43. export OS_PASSWORD=123456
  44. export OS_AUTH_URL=http://controller:5000/v3
  45. export OS_IDENTITY_API_VERSION=3
  46. [root@controller ~]# source admin-openrc.sh #load the Identity service endpoint and the admin project and user credentials
  47. [root@controller ~]# openstack token issue #request an authentication token
  48. +------------+----------------------------------+
  49. | Field | Value |
  50. +------------+----------------------------------+
  51. | expires | --03T15::.249772Z |
  52. | id | 48602913c79046f69d4db4ce7645b61b |
  53. | project_id | 839cdfc946e1491c8004e3b732d17f9a |
  54. | user_id | d4f0c9b24be84306960e29a7961d22a3 |
  55. +------------+----------------------------------+
  56. [root@controller ~]# source demo-openrc.sh #same as above, for the demo user
  57. [root@controller ~]# openstack token issue
  58. +------------+----------------------------------+
  59. | Field | Value |
  60. +------------+----------------------------------+
  61. | expires | --03T15::.666144Z |
  62. | id | 9f3a4ff3239f418c8c000e712b42b216 |
  63. | project_id | 2003811a2ad548e7b686f06a55fe9ce9 |
  64. | user_id | d4ffbeefe72d412187047a79e3a51d00 |
  65. +------------+----------------------------------+

 

VI. Adding the Image Service

The OpenStack Image service (glance) lets users discover, register, and retrieve virtual machine images. It provides a REST API for querying virtual machine image metadata and retrieving an actual image. Virtual machine images can be stored in a variety of locations, from simple file systems to object-storage systems such as OpenStack Object Storage, and made available through the Image service.

1. Service overview

  1. The Image service (glance) lets users discover, register, and retrieve virtual machine images. It provides a REST API for querying virtual machine image metadata and retrieving an existing image. You can store virtual machine images in a variety of locations, from simple file systems to object-storage systems such as OpenStack Object Storage, and make them available through the Image service.
  2. The OpenStack Image service is central to IaaS. It accepts API requests for disk or server images, and metadata definitions from end users or OpenStack Compute components. It also supports storing disk or server images in various repository types, including OpenStack Object Storage.
  3. A number of periodic processes run on the OpenStack Image service to support caching. A replication service ensures consistency and availability across the cluster. Other periodic processes include auditors, updaters, and reapers.
  4. The OpenStack Image service includes the following components:
  5. glance-api
  6.   Accepts Image API calls for image discovery, retrieval, and storage.
  7. glance-registry
  8.   Stores, processes, and retrieves image metadata, which includes items such as size and type.
  9.   glance-registry is a private internal service meant for use by the OpenStack Image service itself. Do not expose it to users.
  10. Database
  11.   Stores image metadata. You can choose whichever database you prefer; most deployments use MySQL or SQLite.
  12. Storage repository for image files
  13.   Various repository types are supported, including normal file systems, object storage, RADOS block devices, HTTP, and Amazon S3. Note that some repositories support only read-only use.
  14. Metadata definition service
  15.   A common API for vendors, administrators, services, and users to define their own metadata. This metadata can be applied to different resources such as images, artifacts, volumes, flavors, and aggregates. A definition includes a new property's key, description, constraints, and the resource types it can be associated with.

2. Prerequisites: before installing and configuring the Image service, you must create a database, service credentials, and API endpoints.

  1. [root@controller ~]# mysql -u root -p123456 #create the database and grant privileges
  2. MariaDB [(none)]> CREATE DATABASE glance;
  3. Query OK, 1 row affected (0.00 sec)
  4. MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
  5. Query OK, 0 rows affected (0.01 sec)
  6. MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
  7. Query OK, 0 rows affected (0.00 sec)
  8. MariaDB [(none)]> \q
  9. Bye
  1. [root@controller ~]# source admin-openrc.sh #load the admin credentials to gain access to admin-only commands
  2. [root@controller ~]# openstack user create --domain default --password-prompt glance #create the glance user
  3. User Password: #set the password to 123456
  4. Repeat User Password:
  5. +-----------+----------------------------------+
  6. | Field | Value |
  7. +-----------+----------------------------------+
  8. | domain_id | default |
  9. | enabled | True |
  10. | id | 87a0389545e54e6697db202744c736b6 |
  11. | name | glance |
  12. +-----------+----------------------------------+
  13. [root@controller ~]# openstack role add --project service --user glance admin #add the admin role to the glance user in the service project; this command produces no output
  14. [root@controller ~]# openstack service create --name glance --description "OpenStack Image service" image #create the glance service entity
  15. +-------------+----------------------------------+
  16. | Field | Value |
  17. +-------------+----------------------------------+
  18. | description | OpenStack Image service |
  19. | enabled | True |
  20. | id | b4c7005fde9b4c0085e2fc5874f02f34 |
  21. | name | glance |
  22. | type | image |
  23. +-------------+----------------------------------+
  24. Create the Image service API endpoints
  25. [root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
  26. +--------------+----------------------------------+
  27. | Field | Value |
  28. +--------------+----------------------------------+
  29. | enabled | True |
  30. | id | 589466fdddf447b9b7e273954c2b7987 |
  31. | interface | public |
  32. | region | RegionOne |
  33. | region_id | RegionOne |
  34. | service_id | b4c7005fde9b4c0085e2fc5874f02f34 |
  35. | service_name | glance |
  36. | service_type | image |
  37. | url | http://controller:9292 |
  38. +--------------+----------------------------------+
  39. [root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
  40. +--------------+----------------------------------+
  41. | Field | Value |
  42. +--------------+----------------------------------+
  43. | enabled | True |
  44. | id | f67a5c559caf4580aee84304d1a2f37d |
  45. | interface | internal |
  46. | region | RegionOne |
  47. | region_id | RegionOne |
  48. | service_id | b4c7005fde9b4c0085e2fc5874f02f34 |
  49. | service_name | glance |
  50. | service_type | image |
  51. | url | http://controller:9292 |
  52. +--------------+----------------------------------+
  53. [root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
  54. +--------------+----------------------------------+
  55. | Field | Value |
  56. +--------------+----------------------------------+
  57. | enabled | True |
  58. | id | fb54cd8ff23b4ea0872f1a5db7182d8e |
  59. | interface | admin |
  60. | region | RegionOne |
  61. | region_id | RegionOne |
  62. | service_id | b4c7005fde9b4c0085e2fc5874f02f34 |
  63. | service_name | glance |
  64. | service_type | image |
  65. | url | http://controller:9292 |
  66. +--------------+----------------------------------+

3. Service installation

  1. [root@controller ~]# yum install -y openstack-glance python-glance python-glanceclient
  2. [root@controller neutron]# grep "^[a-z]" -B 1 /etc/glance/glance-api.conf #edit /etc/glance/glance-api.conf
  3. [DEFAULT]
  4. notification_driver = noop #configure noop to disable notifications, because they only pertain to the optional Telemetry service
  5. verbose = True
  6. [database]
  7. connection = mysql://glance:123456@controller/glance #configure database access
  8. [glance_store]
  9. default_store = file #configure the local file system store and the location of image files
  10. filesystem_store_datadir = /var/lib/glance/images/
  11. [keystone_authtoken] #configure Identity service access; comment out or remove any other options in [keystone_authtoken]
  12. auth_uri = http://controller:5000
  13. auth_url = http://controller:35357
  14. auth_plugin = password
  15. project_domain_id = default
  16. user_domain_id = default
  17. project_name = service
  18. username = glance
  19. password = 123456
  20. [paste_deploy]
  21. flavor = keystone #configure Identity service access
  22. [root@controller neutron]# grep "^[a-z]" -B 1 /etc/glance/glance-registry.conf #edit /etc/glance/glance-registry.conf
  23. [DEFAULT]
  24. notification_driver = noop #configure noop to disable notifications, because they only pertain to the optional Telemetry service
  25. verbose = True
  26. [database]
  27. connection = mysql://glance:123456@controller/glance
  28. [keystone_authtoken] #configure Identity service access; comment out or remove any other options in [keystone_authtoken]
  29. auth_uri = http://controller:5000
  30. auth_url = http://controller:35357
  31. auth_plugin = password
  32. project_domain_id = default
  33. user_domain_id = default
  34. project_name = service
  35. username = glance
  36. password = 123456
  37. [paste_deploy]
  38. flavor = keystone #configure Identity service access
  39. [root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance #populate the Image service database
  40. [root@controller yum.repos.d]# tail /var/log/glance/api.log
  41. -- ::34.439 INFO migrate.versioning.api [-] -> ...
  42. -- ::34.468 INFO glance.db.sqlalchemy.migrate_repo.schema [-] creating table artifacts
  43. -- ::34.567 INFO glance.db.sqlalchemy.migrate_repo.schema [-] creating table artifact_tags
  44. -- ::34.978 INFO glance.db.sqlalchemy.migrate_repo.schema [-] creating table artifact_properties
  45. -- ::35.054 INFO glance.db.sqlalchemy.migrate_repo.schema [-] creating table artifact_blobs
  46. -- ::35.211 INFO glance.db.sqlalchemy.migrate_repo.schema [-] creating table artifact_blob_locations
  47. -- ::35.339 INFO glance.db.sqlalchemy.migrate_repo.schema [-] creating table artifact_dependencies
  48. -- ::35.542 INFO migrate.versioning.api [-] done
  49. -- ::35.542 INFO migrate.versioning.api [-] -> ...
  50. -- ::36.271 INFO migrate.versioning.api [-] done
  51. [root@controller yum.repos.d]# systemctl enable openstack-glance-api.service openstack-glance-registry.service #start the Image services and configure them to start at boot
  52. [root@controller yum.repos.d]# systemctl start openstack-glance-api.service openstack-glance-registry.service
  53. [root@controller ~]# netstat -tnlp|grep python
  54. tcp 0.0.0.0:9292 0.0.0.0:* LISTEN /python2 #glance-api
  55. tcp 0.0.0.0:9191 0.0.0.0:* LISTEN /python2 #glance-registry
  56. Verify operation
  57. [root@controller ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh #in each client-environment script, configure the Image service client to use API version 2
  58. export OS_IMAGE_API_VERSION=2
  59. [root@controller ~]# source admin-openrc.sh #load the admin credentials to gain access to admin-only commands
  60. [root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img #download the test source image
  61. [root@controller ~]# glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
    #upload the image to the Image service using the QCOW2 disk format and the bare container format, and make it publicly visible so all projects can access it
  62. [=============================>] 100%
  63. +------------------+--------------------------------------+
  64. | Property | Value |
  65. +------------------+--------------------------------------+
  66. | checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
  67. | container_format | bare |
  68. | created_at | --04T11::48Z |
  69. | disk_format | qcow2 |
  70. | id | 936bce27-085b-4d79-8cce-68cff70d7abd |
  71. | min_disk | 0 |
  72. | min_ram | 0 |
  73. | name | cirros |
  74. | owner | 839cdfc946e1491c8004e3b732d17f9a |
  75. | protected | False |
  76. | size | 13287936 |
  77. | status | active |
  78. | tags | [] |
  79. | updated_at | --04T11::49Z |
  80. | virtual_size | None |
  81. | visibility | public |
  82. +------------------+--------------------------------------+
  83. [root@controller ~]# glance image-list #confirm the upload and validate the image attributes
  84. +--------------------------------------+--------+
  85. | ID | Name |
  86. +--------------------------------------+--------+
  87. | 936bce27-085b-4d79-8cce-68cff70d7abd | cirros |
  88. +--------------------------------------+--------+
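It can also be worth confirming that the downloaded file really is a QCOW2 image, before or after the upload; a hedged check using qemu-img (provided by the qemu-img package):

    [root@controller ~]# qemu-img info cirros-0.3.4-x86_64-disk.img #should report 'file format: qcow2'
    [root@controller ~]# md5sum cirros-0.3.4-x86_64-disk.img #should match the checksum field printed by glance image-create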

VII. Installing and Configuring the Compute Service (nova)

1. Service overview

  1. Use the OpenStack Compute service to host and manage cloud computing systems. OpenStack Compute is a major part of an Infrastructure-as-a-Service (IaaS) system, and its modules are implemented mostly in Python.
  2. OpenStack Compute interacts with the OpenStack Identity service for authentication, with the OpenStack Image service for disk images, and with the OpenStack dashboard for the user and administrative interfaces. Access to disk images is limited by project and by user; quotas are set per project (for example, the number of instances each project may create). OpenStack Compute can scale horizontally on standard hardware and downloads disk images to launch virtual machine instances.
  3. The OpenStack Compute service consists of the following components:
  4. nova-api service
  5.   Accepts and responds to end-user compute API calls. It supports the OpenStack Compute API and the Amazon EC2 API, as well as a special admin API for privileged users to perform administrative actions. It enforces some policies and initiates most orchestration activities, such as running an instance.
  6. nova-api-metadata service
  7.   Accepts metadata requests from instances. The nova-api-metadata service is generally used in multi-host mode with nova-network installations. For details, see `Metadata service <http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service>`__ in the OpenStack Administrator Guide.
  8. nova-compute service
  9.   A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example:
  10.   1. XenAPI for XenServer/XCP
  11.   2. libvirt for KVM or QEMU
  12.   3. VMwareAPI for VMware
  13.   Processing is fairly complex. Fundamentally, the daemon accepts action requests from the queue, translates them into a series of system commands such as launching a KVM instance, and then updates the instance's state in the database.
  14. nova-scheduler service
  15.   Takes a virtual machine instance request from the queue and determines on which compute server host it should run.
  16. nova-conductor module
  17.   Mediates between the nova-compute service and the database. It eliminates direct access to the cloud database by nova-compute. The nova-conductor module scales horizontally, but do not deploy it on nodes that run the nova-compute service. See the `Configuration Reference Guide <http://docs.openstack.org/mitaka/config-reference/compute/conductor.html>`__.
  18. nova-cert module
  19.   A server daemon that serves the Nova Cert service for X509 certificates, used to generate certificates for euca-bundle-image. Only needed for EC2 API requests.
  20. nova-network worker daemon
  21.   Similar to the nova-compute service: it accepts networking tasks from the queue and manipulates the network, performing tasks such as setting up bridging interfaces or changing iptables rules.
  22. nova-consoleauth daemon
  23.   Authorizes tokens for users that console proxies provide (see nova-novncproxy and nova-xvpvncproxy). This service must be running for console proxies to work. In a cluster configuration you can run proxies of either type against a single nova-consoleauth service. For more information, see `About nova-consoleauth <http://docs.openstack.org/admin-guide/compute-remote-console-access.html#about-nova-consoleauth>`__.
  24. nova-novncproxy daemon
  25.   Provides a proxy for accessing running instances through a VNC connection; supports browser-based novnc clients.
  26. nova-spicehtml5proxy daemon
  27.   Provides a proxy for accessing running instances through a SPICE connection; supports browser-based HTML5 clients.
  28. nova-xvpvncproxy daemon
  29.   Provides a proxy for accessing running instances through a VNC connection; supports OpenStack-specific Java clients.
  30. nova-cert daemon
  31.   Manages X509 certificates.
  32. nova client
  33.   Enables users to submit commands as a tenant administrator or an end user.
  34. The queue
  35.   A central hub for passing messages between daemons, usually implemented with RabbitMQ or another AMQP message queue such as ZeroMQ.
  36. SQL database
  37.   Stores most build-time and run-time state for the cloud infrastructure, including:
  38.   1. Available instance types (flavors)
  39.   2. Instances in use
  40.   3. Available networks
  41.   4. Projects
  42. In theory, OpenStack Compute can support any database backend supported by SQLAlchemy. SQLite3 is common for test and development work, while MySQL and PostgreSQL are typical for production environments.

2. Prerequisites: create the Nova database with the appropriate grants, plus the service credentials and API endpoints

On the controller node:

  1. [root@controller ~]# mysql -u root -p123456
  2. MariaDB [(none)]> CREATE DATABASE nova; #create the nova database
  3. Query OK, 1 row affected (0.00 sec)
  4. MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456'; #grant proper access to the nova database
  5. Query OK, 0 rows affected (0.01 sec)
  6. MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
  7. Query OK, 0 rows affected (0.00 sec)
  8. MariaDB [(none)]> \q
  9. Bye
  10. [root@controller ~]# source admin-openrc.sh #load the admin credentials to gain access to admin-only commands
  11. Create the service credentials
  12. [root@controller ~]# openstack user create --domain default --password-prompt nova #create the nova user
  13. User Password: #set the password to 123456
  14. Repeat User Password:
  15. +-----------+----------------------------------+
  16. | Field | Value |
  17. +-----------+----------------------------------+
  18. | domain_id | default |
  19. | enabled | True |
  20. | id | 00a917a5ba494d13b3c48bb51d47384c |
  21. | name | nova |
  22. +-----------+----------------------------------+
  23. [root@controller ~]# openstack role add --project service --user nova admin #add the admin role to the nova user; this command produces no output
  24. [root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute #create the nova service entity
  25. +-------------+----------------------------------+
  26. | Field | Value |
  27. +-------------+----------------------------------+
  28. | description | OpenStack Compute |
  29. | enabled | True |
  30. | id | 9ced96bbfda44296aba0311fbc52f68e |
  31. | name | nova |
  32. | type | compute |
  33. +-------------+----------------------------------+
  34. Create the Compute service API endpoints
  35. [root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2/%\(tenant_id\)s
  36. +--------------+-----------------------------------------+
  37. | Field | Value |
  38. +--------------+-----------------------------------------+
  39. | enabled | True |
  40. | id | 02b501d9270345fe887165c35c9ee9b2 |
  41. | interface | public |
  42. | region | RegionOne |
  43. | region_id | RegionOne |
  44. | service_id | 9ced96bbfda44296aba0311fbc52f68e |
  45. | service_name | nova |
  46. | service_type | compute |
  47. | url | http://controller:8774/v2/%(tenant_id)s |
  48. +--------------+-----------------------------------------+
  49. [root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2/%\(tenant_id\)s
  50. +--------------+-----------------------------------------+
  51. | Field | Value |
  52. +--------------+-----------------------------------------+
  53. | enabled | True |
  54. | id | 886844dc06d84b838e623f6d3939818c |
  55. | interface | internal |
  56. | region | RegionOne |
  57. | region_id | RegionOne |
  58. | service_id | 9ced96bbfda44296aba0311fbc52f68e |
  59. | service_name | nova |
  60. | service_type | compute |
  61. | url | http://controller:8774/v2/%(tenant_id)s |
  62. +--------------+-----------------------------------------+
  63. [root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2/%\(tenant_id\)s
  64. +--------------+-----------------------------------------+
  65. | Field | Value |
  66. +--------------+-----------------------------------------+
  67. | enabled | True |
  68. | id | b72dc761e3004e398277d90441ee2cc3 |
  69. | interface | admin |
  70. | region | RegionOne |
  71. | region_id | RegionOne |
  72. | service_id | 9ced96bbfda44296aba0311fbc52f68e |
  73. | service_name | nova |
  74. | service_type | compute |
  75. | url | http://controller:8774/v2/%(tenant_id)s |
  76. +--------------+-----------------------------------------+

3. Service installation

  1. [root@controller ~]# yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient #install the packages
  2. [root@controller neutron]# grep "^[a-z]" -B 1 /etc/nova/nova.conf #edit the /etc/nova/nova.conf file
  3. [DEFAULT]
  4. rpc_backend = rabbit #configure RabbitMQ message queue access
  5. auth_strategy = keystone #configure Identity service access
  6. my_ip = 192.168.1.101 #set my_ip to the IP address of the management interface on the controller node
  7. network_api_class = nova.network.neutronv2.api.API #enable support for the Networking service
  8. security_group_api = neutron
  9. linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
  10. firewall_driver = nova.virt.firewall.NoopFirewallDriver
  11. enabled_apis=osapi_compute,metadata #disable the EC2 API
  12. verbose = True
  13. [database]
  14. connection = mysql://nova:123456@controller/nova #configure database access
  15. [glance]
  16. host = controller #configure the location of the Image service; an IP address also works if the name cannot be resolved
  17. [keystone_authtoken] #configure Identity service access
  18. auth_uri = http://controller:5000
  19. auth_url = http://controller:35357
  20. auth_plugin = password
  21. project_domain_id = default
  22. user_domain_id = default
  23. project_name = service
  24. username = nova
  25. password = 123456
  26. [neutron] #configure Compute to use the Networking service, enable the metadata proxy, and configure the secret
  27. url = http://controller:9696
  28. auth_url = http://controller:35357
  29. auth_plugin = password
  30. project_domain_id = default
  31. user_domain_id = default
  32. region_name = RegionOne
  33. project_name = service
  34. username = neutron
  35. password = 123456
  36. service_metadata_proxy = True #enable the metadata proxy and configure the shared secret
  37. metadata_proxy_shared_secret = 123456 #choose your own value; it just has to match the one in /etc/neutron/metadata_agent.ini
  38. [oslo_concurrency]
  39. lock_path = /var/lib/nova/tmp #configure the lock path
  40. [oslo_messaging_rabbit] #configure RabbitMQ message queue access
  41. rabbit_host = controller
  42. rabbit_userid = openstack
  43. rabbit_password = 123456
  44. [vnc] #configure the VNC proxy to use the management IP address of the controller node
  45. vncserver_listen = $my_ip
  46. vncserver_proxyclient_address = $my_ip
  47. [root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova #populate the Compute database; ignore any warnings
  48. [root@controller yum.repos.d]# tail /var/log/nova/nova-manage.log
  49. -- ::52.552 INFO migrate.versioning.api [-] -> ...
  50. -- ::52.663 INFO migrate.versioning.api [-] done
  51. -- ::52.664 INFO migrate.versioning.api [-] -> ...
  52. -- ::52.740 INFO migrate.versioning.api [-] done
  53. -- ::52.740 INFO migrate.versioning.api [-] -> ...
  54. -- ::52.931 INFO migrate.versioning.api [-] done
  55. -- ::52.931 INFO migrate.versioning.api [-] -> ...
  56. -- ::53.217 INFO migrate.versioning.api [-] done
  57. -- ::53.218 INFO migrate.versioning.api [-] -> ...
  58. -- ::53.230 INFO migrate.versioning.api [-] done
  59. [root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service #start the Compute services and configure them to start at boot
  60. [root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

On compute1 (the compute node), install and configure the Nova service:

  1. [root@compute1 ~]# yum install -y openstack-nova-compute sysfsutils
  2. [root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo #determine whether the compute node supports hardware acceleration for virtual machines; if this command returns 1 or more, the node supports hardware acceleration and typically needs no extra configuration
  3. If this command returns 0, the node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM (see the sketch after this list)
  4. [root@compute1 neutron]# grep "^[a-z]" -B 1 /etc/nova/nova.conf #edit the /etc/nova/nova.conf file
  5. [DEFAULT]
  6. rpc_backend = rabbit #configure RabbitMQ message queue access
  7. auth_strategy = keystone #configure Identity service access
  8. my_ip = 192.168.1.102 #the IP address of the management network interface on the compute node
  9. network_api_class = nova.network.neutronv2.api.API #enable support for the Networking service
  10. security_group_api = neutron
  11. linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
  12. firewall_driver = nova.virt.firewall.NoopFirewallDriver #Networking includes its own firewall service, so you must disable the Compute firewall service with the nova.virt.firewall.NoopFirewallDriver driver
  13. verbose = True
  14. [glance]
  15. host = controller #configure the location of the Image service
  16. [keystone_authtoken] #configure Identity service access
  17. auth_uri = http://controller:5000
  18. auth_url = http://controller:35357
  19. auth_plugin = password
  20. project_domain_id = default
  21. user_domain_id = default
  22. project_name = service
  23. username = nova
  24. password = 123456
  25. [libvirt]
  26. virt_type = kvm
  27. [neutron] #configure Compute to use the Networking service
  28. url = http://controller:9696
  29. auth_url = http://controller:35357
  30. auth_plugin = password
  31. project_domain_id = default
  32. user_domain_id = default
  33. region_name = RegionOne
  34. project_name = service
  35. username = neutron
  36. password = 123456
  37. [oslo_concurrency]
  38. lock_path = /var/lib/nova/tmp #configure the lock path
  39. [oslo_messaging_rabbit] #configure RabbitMQ message queue access
  40. rabbit_host = controller
  41. rabbit_userid = openstack
  42. rabbit_password = 123456
  43. [vnc] #enable and configure remote console access
  44. enabled = True
  45. vncserver_listen = 0.0.0.0
  46. vncserver_proxyclient_address = $my_ip
  47. novncproxy_base_url = http://controller:6080/vnc_auto.html #if the host cannot resolve the controller hostname, replace controller with the management-network IP address of the controller node
  48. [root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service #start the Compute service and its dependencies and configure them to start at boot
  49. [root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
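If the vmx/svm check above returned 0 (for example, when compute1 is itself a VM without nested virtualization), a minimal sketch of the fallback configuration mentioned in step 3:

    [root@compute1 ~]# vim /etc/nova/nova.conf
    [libvirt]
    virt_type = qemu #use QEMU software emulation instead of KVM hardware acceleration
    [root@compute1 ~]# systemctl restart openstack-nova-compute.service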

Verify operation:
On the controller node:

  1. [root@controller ~]# source admin-openrc.sh #load the admin credentials to gain access to admin-only commands
  2. [root@controller ~]# nova service-list #list the service components to verify that each process started and registered successfully; the output should show four service components enabled on the controller node and one on the compute node
  3. +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
  4. | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
  5. +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
  6. | | nova-scheduler | controller | internal | enabled | up | --04T12::55.000000 | - |
  7. | | nova-conductor | controller | internal | enabled | up | --04T12::55.000000 | - |
  8. | | nova-consoleauth | controller | internal | enabled | up | --04T12::55.000000 | - |
  9. | | nova-cert | controller | internal | enabled | up | --04T12::55.000000 | - |
  10. | | nova-compute | compute1 | nova | enabled | up | --04T12::49.000000 | - |
  11. +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
  12. [root@controller ~]# nova endpoints #list the API endpoints in the Identity service to verify connectivity with it
  13. WARNING: keystone has no endpoint in ! Available endpoints for this service: #ignore the warnings in the output
  14. +-----------+----------------------------------+
  15. | keystone | Value |
  16. +-----------+----------------------------------+
  17. | id | 00da46788e874f529f67046226c7b0c9 |
  18. | interface | internal |
  19. | region | RegionOne |
  20. | region_id | RegionOne |
  21. | url | http://controller:5000/v2.0 |
  22. +-----------+----------------------------------+
  23. +-----------+----------------------------------+
  24. | keystone | Value |
  25. +-----------+----------------------------------+
  26. | id | 1ee55eac378f4d179bacb4ea3d1850d1 |
  27. | interface | public |
  28. | region | RegionOne |
  29. | region_id | RegionOne |
  30. | url | http://controller:5000/v2.0 |
  31. +-----------+----------------------------------+
  32. +-----------+----------------------------------+
  33. | keystone | Value |
  34. +-----------+----------------------------------+
  35. | id | fab8917d632a4a8c8ccb4290cbd382c6 |
  36. | interface | admin |
  37. | region | RegionOne |
  38. | region_id | RegionOne |
  39. | url | http://controller:35357/v2.0 |
  40. +-----------+----------------------------------+
  41. WARNING: nova has no endpoint in ! Available endpoints for this service:
  42. +-----------+------------------------------------------------------------+
  43. | nova | Value |
  44. +-----------+------------------------------------------------------------+
  45. | id | 02b501d9270345fe887165c35c9ee9b2 |
  46. | interface | public |
  47. | region | RegionOne |
  48. | region_id | RegionOne |
  49. | url | http://controller:8774/v2/839cdfc946e1491c8004e3b732d17f9a |
  50. +-----------+------------------------------------------------------------+
  51. +-----------+------------------------------------------------------------+
  52. | nova | Value |
  53. +-----------+------------------------------------------------------------+
  54. | id | 886844dc06d84b838e623f6d3939818c |
  55. | interface | internal |
  56. | region | RegionOne |
  57. | region_id | RegionOne |
  58. | url | http://controller:8774/v2/839cdfc946e1491c8004e3b732d17f9a |
  59. +-----------+------------------------------------------------------------+
  60. +-----------+------------------------------------------------------------+
  61. | nova | Value |
  62. +-----------+------------------------------------------------------------+
  63. | id | b72dc761e3004e398277d90441ee2cc3 |
  64. | interface | admin |
  65. | region | RegionOne |
  66. | region_id | RegionOne |
  67. | url | http://controller:8774/v2/839cdfc946e1491c8004e3b732d17f9a |
  68. +-----------+------------------------------------------------------------+
  69. WARNING: glance has no endpoint in ! Available endpoints for this service:
  70. +-----------+----------------------------------+
  71. | glance | Value |
  72. +-----------+----------------------------------+
  73. | id | 589466fdddf447b9b7e273954c2b7987 |
  74. | interface | public |
  75. | region | RegionOne |
  76. | region_id | RegionOne |
  77. | url | http://controller:9292 |
  78. +-----------+----------------------------------+
  79. +-----------+----------------------------------+
  80. | glance | Value |
  81. +-----------+----------------------------------+
  82. | id | f67a5c559caf4580aee84304d1a2f37d |
  83. | interface | internal |
  84. | region | RegionOne |
  85. | region_id | RegionOne |
  86. | url | http://controller:9292 |
  87. +-----------+----------------------------------+
  88. +-----------+----------------------------------+
  89. | glance | Value |
  90. +-----------+----------------------------------+
  91. | id | fb54cd8ff23b4ea0872f1a5db7182d8e |
  92. | interface | admin |
  93. | region | RegionOne |
  94. | region_id | RegionOne |
  95. | url | http://controller:9292 |
  96. +-----------+----------------------------------+
  97. [root@controller ~]# nova image-list #list the images in the Image service catalog to verify connectivity with it
  98. +--------------------------------------+--------+--------+--------+
  99. | ID | Name | Status | Server |
  100. +--------------------------------------+--------+--------+--------+
  101. | 936bce27-085b-4d79-8cce-68cff70d7abd | cirros | ACTIVE | |
  102. +--------------------------------------+--------+--------+--------+

VIII. Installing and Configuring the Networking Service (neutron)

OpenStack Networking (neutron) manages all networking aspects of the Virtual Networking Infrastructure (VNI) and the access-layer aspects of the Physical Networking Infrastructure (PNI) in your OpenStack environment. It lets tenants create advanced virtual network topologies that include services such as firewalls, load balancers, and virtual private networks (VPNs).

1. Service overview

  1. OpenStack Networking (neutron) allows you to create and attach interface devices that are managed by other OpenStack services. Its plug-in architecture accommodates different networking equipment and software, giving flexibility to OpenStack architecture and deployment.
  2. It includes the following components:
  3. neutron-server
  4.   Accepts API requests and routes them to the appropriate OpenStack Networking plug-in for action.
  5. OpenStack Networking plug-ins and agents
  6.   Plug and unplug ports, create networks and subnets, and provide IP addressing. These plug-ins and agents differ depending on the vendor and technology; OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and VMware NSX products.
  7.   The common agents are the L3 (layer-3) agent, the DHCP agent, and the plug-in agents.
  8. Messaging queue
  9.   Used by most OpenStack Networking installations to route information between neutron-server and the various agents. It also acts as a database for some plug-ins, storing networking state.
  10. OpenStack Networking mainly interacts with OpenStack Compute to provide network connectivity to its instances.

2. Prerequisites: create the neutron database, service credentials, and API endpoints

  1. [root@controller ~]# mysql -u root -p123456
  2. MariaDB [(none)]> CREATE DATABASE neutron; #create the neutron database
  3. Query OK, 1 row affected (0.00 sec)
  4. MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456'; #grant proper access to the neutron database
  5. Query OK, 0 rows affected (0.03 sec)
  6. MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
  7. Query OK, 0 rows affected (0.00 sec)
  8. MariaDB [(none)]> \q
  9. Bye
  10. [root@controller ~]# source admin-openrc.sh #load the admin credentials to gain access to admin-only commands
  11. Create the service credentials
  12. [root@controller ~]# openstack user create --domain default --password-prompt neutron #create the neutron user
  13. User Password: #set the password to 123456
  14. Repeat User Password:
  15. +-----------+----------------------------------+
  16. | Field | Value |
  17. +-----------+----------------------------------+
  18. | domain_id | default |
  19. | enabled | True |
  20. | id | c704bcba775b43b4b9b12a06f60af725 |
  21. | name | neutron |
  22. +-----------+----------------------------------+
  23. [root@controller ~]# openstack role add --project service --user neutron admin #add the admin role to the neutron user; this command produces no output
  24. [root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network #create the neutron service entity
  25. +-------------+----------------------------------+
  26. | Field | Value |
  27. +-------------+----------------------------------+
  28. | description | OpenStack Networking |
  29. | enabled | True |
  30. | id | 71ddd68d6f6c463f8656274270650d68 |
  31. | name | neutron |
  32. | type | network |
  33. +-------------+----------------------------------+
  34. [root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696 #create the Networking service API endpoints
  35. +--------------+----------------------------------+
  36. | Field | Value |
  37. +--------------+----------------------------------+
  38. | enabled | True |
  39. | id | 7761b18170534542af7a614f53025110 |
  40. | interface | public |
  41. | region | RegionOne |
  42. | region_id | RegionOne |
  43. | service_id | 71ddd68d6f6c463f8656274270650d68 |
  44. | service_name | neutron |
  45. | service_type | network |
  46. | url | http://controller:9696 |
  47. +--------------+----------------------------------+
  48. [root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
  49. +--------------+----------------------------------+
  50. | Field | Value |
  51. +--------------+----------------------------------+
  52. | enabled | True |
  53. | id | 1e92ad2a17854c678d37079dd9a9e297 |
  54. | interface | internal |
  55. | region | RegionOne |
  56. | region_id | RegionOne |
  57. | service_id | 71ddd68d6f6c463f8656274270650d68 |
  58. | service_name | neutron |
  59. | service_type | network |
  60. | url | http://controller:9696 |
  61. +--------------+----------------------------------+
  62. [root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
  63. +--------------+----------------------------------+
  64. | Field | Value |
  65. +--------------+----------------------------------+
  66. | enabled | True |
  67. | id | 077b1b1213a84699b6c5fda239db148d |
  68. | interface | admin |
  69. | region | RegionOne |
  70. | region_id | RegionOne |
  71. | service_id | 71ddd68d6f6c463f8656274270650d68 |
  72. | service_name | neutron |
  73. | service_type | network |
  74. | url | http://controller:9696 |
  75. +--------------+----------------------------------+
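A hedged check that all three endpoint variants registered as expected:

    [root@controller ~]# openstack endpoint list | grep network #public, internal, and admin should each point at http://controller:9696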

3. Configuring the service (networking option 2, self-service networks, is used here)

On the controller node:

  1. [root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
  1. [root@controller ~]# grep "^[a-z]" -B 1 /etc/neutron/neutron.conf #edit the /etc/neutron/neutron.conf file
  2. [DEFAULT]
  3. core_plugin = ml2 #enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses
  4. service_plugins = router
  5. allow_overlapping_ips = True
  6. rpc_backend = rabbit #configure RabbitMQ message queue access
  7. auth_strategy = keystone #configure Identity service access
  8. notify_nova_on_port_status_changes = True #configure Networking to notify Compute of network topology changes
  9. notify_nova_on_port_data_changes = True
  10. nova_url = http://controller:8774/v2
  11. verbose = True #enable verbose logging
  12. [keystone_authtoken] #configure Identity service access; comment out or remove any other options in [keystone_authtoken]
  13. auth_uri = http://controller:5000
  14. auth_url = http://controller:35357
  15. auth_plugin = password
  16. project_domain_id = default
  17. user_domain_id = default
  18. project_name = service
  19. username = neutron
  20. password = 123456
  21. [database]
  22. connection = mysql://neutron:123456@controller/neutron #configure database access
  23. [nova] #configure Networking to notify Compute of network topology changes
  24. auth_url = http://controller:35357
  25. auth_plugin = password
  26. project_domain_id = default
  27. user_domain_id = default
  28. region_name = RegionOne
  29. project_name = service
  30. username = nova
  31. password = 123456
  32. [oslo_concurrency]
  33. lock_path = /var/lib/neutron/tmp #configure the lock path
  34. [oslo_messaging_rabbit] #configure RabbitMQ message queue access
  35. rabbit_host = controller
  36. rabbit_userid = openstack
  37. rabbit_password = 123456
  38. [root@controller ~]# grep "^[a-z]" -B 1 /etc/neutron/plugins/ml2/ml2_conf.ini #edit the /etc/neutron/plugins/ml2/ml2_conf.ini file
  39. [ml2]
  40. type_drivers = flat,vlan,vxlan #enable flat, VLAN, and VXLAN networks
  41. tenant_network_types = vxlan #enable VXLAN project (private) networks; the Linux bridge agent only supports VXLAN overlay networks
  42. mechanism_drivers = linuxbridge,l2population #enable the Linux bridge and layer-2 population mechanisms
  43. extension_drivers = port_security #enable the port security extension driver
  44. [ml2_type_flat]
  45. flat_networks = public #configure the public flat provider network
  46. [ml2_type_vxlan]
  47. vni_ranges = 1:1000 #configure the VXLAN network identifier range for private networks
  48. [securitygroup]
  49. enable_ipset = True #enable ipset to improve the efficiency of security group rules
  50. [root@controller ~]# grep "^[a-z]" -B 1 /etc/neutron/plugins/ml2/linuxbridge_agent.ini #edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file
  51. [linux_bridge]
  52. physical_interface_mappings = public:ens32 #map the public virtual network to the public physical network interface
  53. [vxlan] #enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay networks, and enable layer-2 population
  54. enable_vxlan = True
  55. local_ip = 192.168.1.101
  56. l2_population = True
  57. [agent]
  58. prevent_arp_spoofing = True #enable ARP spoofing protection
  59. [securitygroup] #enable security groups and configure the Linux bridge iptables firewall driver
  60. enable_security_group = True
  61. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  62. [root@controller ~]# grep "^[a-z]" -B 1 /etc/neutron/l3_agent.ini #edit the /etc/neutron/l3_agent.ini file
  63. [DEFAULT] #configure the Linux bridge interface driver and the external network bridge
  64. interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  65. external_network_bridge = #deliberately left without a value, so that multiple external networks can be enabled on a single agent
  66. verbose = True #enable verbose logging
  67. [root@controller ~]# grep "^[a-z]" -B 1 /etc/neutron/dhcp_agent.ini #edit the /etc/neutron/dhcp_agent.ini file
  68. [DEFAULT] #configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on public networks can reach metadata over the network
  69. interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  70. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  71. enable_isolated_metadata = True
  72. verbose = True
  73. dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf #enable the dnsmasq configuration file
  74. [root@controller ~]# grep "^[a-z]" -B 1 /etc/neutron/dnsmasq-neutron.conf #create and edit the /etc/neutron/dnsmasq-neutron.conf file
  75. dhcp-option-force=26,1450 #DHCP option 26 lowers the instance MTU to 1450 to leave room for the VXLAN overlay headers
  76. [root@controller ~]# grep "^[a-z]" -B 1 /etc/neutron/metadata_agent.ini
  77. [DEFAULT] #configure access parameters
  78. auth_uri = http://controller:5000
  79. auth_url = http://controller:35357
  80. auth_region = RegionOne
  81. auth_plugin = password
  82. project_domain_id = default
  83. user_domain_id = default
  84. project_name = service
  85. username = neutron
  86. password = 123456
  87. nova_metadata_ip = controller #configure the metadata host
  88. metadata_proxy_shared_secret = 123456 #configure the metadata proxy shared secret; choose your own value, matching the one in nova.conf
  89. verbose = True
  90. admin_tenant_name = %SERVICE_TENANT_NAME%
  91. admin_user = %SERVICE_USER%
  92. admin_password = %SERVICE_PASSWORD%
  93. [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini #the Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini
  94. [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron #populate the database
  95. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  96. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  97. Running upgrade for neutron ...
  98. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  99. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  100. INFO [alembic.runtime.migration] Running upgrade -> juno, juno_initial
  101. INFO [alembic.runtime.migration] Running upgrade juno -> 44621190bc02, add_uniqueconstraint_ipavailability_ranges
  102. INFO [alembic.runtime.migration] Running upgrade 44621190bc02 -> 1f71e54a85e7, ml2_network_segments models change for multi-segment network.
  103. INFO [alembic.runtime.migration] Running upgrade 1f71e54a85e7 -> 408cfbf6923c, remove ryu plugin
  104. INFO [alembic.runtime.migration] Running upgrade 408cfbf6923c -> 28c0ffb8ebbd, remove mlnx plugin
  105. INFO [alembic.runtime.migration] Running upgrade 28c0ffb8ebbd -> 57086602ca0a, scrap_nsx_adv_svcs_models
  106. INFO [alembic.runtime.migration] Running upgrade 57086602ca0a -> 38495dc99731, ml2_tunnel_endpoints_table
  107. INFO [alembic.runtime.migration] Running upgrade 38495dc99731 -> 4dbe243cd84d, nsxv
  108. INFO [alembic.runtime.migration] Running upgrade 4dbe243cd84d -> 41662e32bce2, L3 DVR SNAT mapping
  109. INFO [alembic.runtime.migration] Running upgrade 41662e32bce2 -> 2a1ee2fb59e0, Add mac_address unique constraint
  110. INFO [alembic.runtime.migration] Running upgrade 2a1ee2fb59e0 -> 26b54cf9024d, Add index on allocated
  111. INFO [alembic.runtime.migration] Running upgrade 26b54cf9024d -> 14be42f3d0a5, Add default security group table
  112. INFO [alembic.runtime.migration] Running upgrade 14be42f3d0a5 -> 16cdf118d31d, extra_dhcp_options IPv6 support
  113. INFO [alembic.runtime.migration] Running upgrade 16cdf118d31d -> 43763a9618fd, add mtu attributes to network
  114. INFO [alembic.runtime.migration] Running upgrade 43763a9618fd -> bebba223288, Add vlan transparent property to network
  115. INFO [alembic.runtime.migration] Running upgrade bebba223288 -> 4119216b7365, Add index on tenant_id column
  116. INFO [alembic.runtime.migration] Running upgrade 4119216b7365 -> 2d2a8a565438, ML2 hierarchical binding
  117. INFO [alembic.runtime.migration] Running upgrade 2d2a8a565438 -> 2b801560a332, Remove Hyper-V Neutron Plugin
  118. INFO [alembic.runtime.migration] Running upgrade 2b801560a332 -> 57dd745253a6, nuage_kilo_migrate
  119. INFO [alembic.runtime.migration] Running upgrade 57dd745253a6 -> f15b1fb526dd, Cascade Floating IP Floating Port deletion
  120. INFO [alembic.runtime.migration] Running upgrade f15b1fb526dd -> 341ee8a4ccb5, sync with cisco repo
  121. INFO [alembic.runtime.migration] Running upgrade 341ee8a4ccb5 -> 35a0f3365720, add port-security in ml2
  122. INFO [alembic.runtime.migration] Running upgrade 35a0f3365720 -> 1955efc66455, weight_scheduler
  123. INFO [alembic.runtime.migration] Running upgrade 1955efc66455 -> 51c54792158e, Initial operations for subnetpools
  124. INFO [alembic.runtime.migration] Running upgrade 51c54792158e -> 589f9237ca0e, Cisco N1kv ML2 driver tables
  125. INFO [alembic.runtime.migration] Running upgrade 589f9237ca0e -> 20b99fd19d4f, Cisco UCS Manager Mechanism Driver
  126. INFO [alembic.runtime.migration] Running upgrade 20b99fd19d4f -> 034883111f, Remove allow_overlap from subnetpools
  127. INFO [alembic.runtime.migration] Running upgrade 034883111f -> 268fb5e99aa2, Initial operations in support of subnet allocation from a pool
  128. INFO [alembic.runtime.migration] Running upgrade 268fb5e99aa2 -> 28a09af858a8, Initial operations to support basic quotas on prefix space in a subnet pool
  129. INFO [alembic.runtime.migration] Running upgrade 28a09af858a8 -> 20c469a5f920, add index for port
  130. INFO [alembic.runtime.migration] Running upgrade 20c469a5f920 -> kilo, kilo
  131. INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
  132. INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
  133. INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes
  134. INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework
  135. INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac
  136. INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.
  137. INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac
  138. INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables
  139. INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
  140. INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
  141. INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
  142. INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
  143. INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage
  144. INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash
  145. INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers
  146. INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool
  147. INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
  148. INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
  149. INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
  150. OK
  151. [root@controller ~]#systemctl restart openstack-nova-api.service #restart the Compute API service
  152. #start the Networking services and enable them to start at boot (for all networking options)
  153. [root@controller ~]#systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  154. [root@controller ~]#systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  155. For networking option 2, also enable and start the layer-3 service:
  156. [root@controller ~]#systemctl enable neutron-l3-agent.service
  157. [root@controller ~]#systemctl start neutron-l3-agent.service
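
Before moving on, a quick sanity check can save debugging time later. This is a minimal sketch that is not part of the original walkthrough; it assumes neutron-server's default API port (9696) and the RDO configuration paths used throughout this guide:

  [root@controller ~]# systemctl is-active neutron-server.service #should print "active"
  [root@controller ~]# ss -tnlp | grep 9696 #neutron-server listens on port 9696 by default
  [root@controller ~]# grep metadata_proxy_shared_secret /etc/nova/nova.conf /etc/neutron/metadata_agent.ini #the two values must match, otherwise the metadata service fails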

 compute1 (compute node):

  1. [root@compute1 ~]# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
  2. Configure the common Networking components: the authentication mechanism, the message queue, and the plug-in.
  3. [root@compute1 ~]# grep "^[a-z]" -B 1 /etc/neutron/neutron.conf
  4. [DEFAULT]
  5. rpc_backend = rabbit #use RabbitMQ for the message queue
  6. auth_strategy = keystone #use the Identity service for authentication; comment out or remove any other options in [keystone_authtoken]
  7. verbose = True
  8. [keystone_authtoken] #Identity service credentials
  9. auth_uri = http://controller:5000
  10. auth_url = http://controller:35357
  11. auth_plugin = password
  12. project_domain_id = default
  13. user_domain_id = default
  14. project_name = service
  15. username = neutron
  16. password = 123456 #the password chosen for the neutron user (this guide uses 123456 throughout)
  17. [oslo_concurrency]
  18. lock_path = /var/lib/neutron/tmp #lock file path
  19. [oslo_messaging_rabbit] #RabbitMQ message queue access
  20. rabbit_host = controller
  21. rabbit_userid = openstack
  22. rabbit_password = 123456 #the password chosen for the openstack account in RabbitMQ
  23. Configure the Linux bridge agent
  24. [root@compute1 ~]# grep "^[a-z]" -B 1 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  25. [linux_bridge]
  26. physical_interface_mappings = public:eth0 #map the public virtual network to the public physical network interface
  27. [vxlan] #enable VXLAN overlay networks and set the IP address of the physical interface that handles overlay traffic
  28. enable_vxlan = True
  29. local_ip = 192.168.1.102
  30. l2_population = True
  31. [agent]
  32. prevent_arp_spoofing = True #enable ARP spoofing protection
  33. [securitygroup] #enable security groups and configure the Linux bridge iptables firewall driver
  34. enable_security_group = True
  35. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  36. [root@compute1 ~]#systemctl restart openstack-nova-compute.service #restart the Compute service
  37. [root@compute1 ~]#systemctl enable neutron-linuxbridge-agent.service #start the Linux bridge agent and enable it to start at boot
  38. [root@compute1 ~]#systemctl start neutron-linuxbridge-agent.service
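
As a hedged extra check (not in the original walkthrough), confirm on compute1 that the agent is running, that local_ip really belongs to a local interface (a mismatch makes VXLAN tunnels fail silently), and that the agent log shows no AMQP connection errors; the log path assumes RDO packaging:

  [root@compute1 ~]# systemctl is-active neutron-linuxbridge-agent.service #should print "active"
  [root@compute1 ~]# ip addr | grep 192.168.1.102 #local_ip must be bound to an interface on this host
  [root@compute1 ~]# tail -n 20 /var/log/neutron/linuxbridge-agent.log #look for RabbitMQ/RPC errors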

Verify operation:
controller (control node):

  1. [root@controller ~]# source admin-openrc.sh #source the admin credentials to gain access to admin-only commands
  2. [root@controller ~]# neutron ext-list #list the loaded extensions to verify that the neutron-server process started correctly
  3. +-----------------------+-----------------------------------------------+
  4. | alias | name |
  5. +-----------------------+-----------------------------------------------+
  6. | dns-integration | DNS Integration |
  7. | ext-gw-mode | Neutron L3 Configurable external gateway mode |
  8. | binding | Port Binding |
  9. | agent | agent |
  10. | subnet_allocation | Subnet Allocation |
  11. | l3_agent_scheduler | L3 Agent Scheduler |
  12. | external-net | Neutron external network |
  13. | flavors | Neutron Service Flavors |
  14. | net-mtu | Network MTU |
  15. | quotas | Quota management support |
  16. | l3-ha | HA Router extension |
  17. | provider | Provider Network |
  18. | multi-provider | Multi Provider Network |
  19. | extraroute | Neutron Extra Route |
  20. | router | Neutron L3 Router |
  21. | extra_dhcp_opt | Neutron Extra DHCP opts |
  22. | security-group | security-group |
  23. | dhcp_agent_scheduler | DHCP Agent Scheduler |
  24. | rbac-policies | RBAC Policies |
  25. | port-security | Port Security |
  26. | allowed-address-pairs | Allowed Address Pairs |
  27. | dvr | Distributed Virtual Router |
  28. +-----------------------+-----------------------------------------------+
  29. [root@controller ~]# neutron agent-list #list the agents to verify they started successfully; the output should show four agents on the controller node and one on each compute node (a troubleshooting note follows the table)
  30. +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
  31. | id | agent_type | host | alive | admin_state_up | binary |
  32. +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
  33. | 186d2121-3fe5-49b6-b462-fe404afb159e | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
  34. | 73aa6284-ac78--80df-2334bcd71736 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
  35. | 7424c397-481e-49c8-a8df-71d68e7c3b29 | L3 agent | controller | :-) | True | neutron-l3-agent |
  36. | 8d555ed3--4af2--7e53145a9b03 | DHCP agent | controller | :-) | True | neutron-dhcp-agent |
  37. | d6f66209---87e7-275dec0e792a | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent |
  38. +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
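
If an agent shows xxx instead of :-) in the alive column, the usual causes are wrong RabbitMQ credentials or clock skew between the nodes. neutron agent-show prints the heartbeat and configuration of a single agent; for example, reusing the Linux bridge agent ID from the table above:

  [root@controller ~]# neutron agent-show 186d2121-3fe5-49b6-b462-fe404afb159e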

九、Launch an instance

Create virtual networks

Before creating a private project network, the public provider network must be created first. (Before launching an instance, the necessary virtual network infrastructure must exist. With networking option 1, instances attach via layer-2 (bridging/switching) to the public provider virtual network, which connects to the physical network infrastructure and includes a DHCP server that hands out IP addresses to instances. The admin or another privileged user must create this network because it attaches directly to the physical network infrastructure.)

  1. Create the public network
  2. [root@controller ~]# source admin-openrc.sh #source the admin credentials to gain access to admin-only commands
  3. [root@controller ~]# neutron net-create public --shared --provider:physical_network public --provider:network_type flat #create the network
  4. Created a new network:
  5. +---------------------------+--------------------------------------+
  6. | Field | Value |
  7. +---------------------------+--------------------------------------+
  8. | admin_state_up | True |
  9. | id | 5fc60cce---b9e2-c768af2ea302 |
  10. | mtu | |
  11. | name | public |
  12. | port_security_enabled | True |
  13. | provider:network_type | flat |
  14. | provider:physical_network | public |
  15. | provider:segmentation_id | |
  16. | router:external | False |
  17. | shared | True |
  18. | status | ACTIVE |
  19. | subnets | |
  20. | tenant_id | e5f65d198e594c9f8a8db29a6a9d01a7 |
  21. +---------------------------+--------------------------------------+
  22. [root@controller ~]# neutron subnet-create public 192.168.1.0/24 --name public --allocation-pool start=192.168.1.220,end=192.168.1.250 --dns-nameserver 114.114.114.114 --gateway 192.168.1.1 #create a subnet on the network
  23. Created a new subnet:
  24. +-------------------+----------------------------------------------------+
  25. | Field | Value |
  26. +-------------------+----------------------------------------------------+
  27. | allocation_pools | {"start": "192.168.1.220", "end": "192.168.1.250"} |
  28. | cidr | 192.168.1.0/24 |
  29. | dns_nameservers | 114.114.114.114 |
  30. | enable_dhcp | True |
  31. | gateway_ip | 192.168.1.1 |
  32. | host_routes | |
  33. | id | ac92ba15-daef-4bc3-a353-ed1325c85844 |
  34. | ip_version | |
  35. | ipv6_address_mode | |
  36. | ipv6_ra_mode | |
  37. | name | public |
  38. | network_id | 5fc60cce---b9e2-c768af2ea302 |
  39. | subnetpool_id | |
  40. | tenant_id | e5f65d198e594c9f8a8db29a6a9d01a7 |
  41. +-------------------+----------------------------------------------------+
  42. Create the private project network
  43. [root@controller ~]# source demo-openrc.sh #source the demo credentials to gain access to user-level commands
  44. [root@controller ~]# neutron net-create private #create the network; unprivileged users generally cannot supply additional parameters to this command
  45. Created a new network:
  46. +-----------------------+--------------------------------------+
  47. | Field | Value |
  48. +-----------------------+--------------------------------------+
  49. | admin_state_up | True |
  50. | id | ce8a6c38-5a84-47c0-b058-9bdd8b67e179 |
  51. | mtu | |
  52. | name | private |
  53. | port_security_enabled | True |
  54. | router:external | False |
  55. | shared | False |
  56. | status | ACTIVE |
  57. | subnets | |
  58. | tenant_id | a152b2b891a147dfa3068d66311ad0c3 |
  59. +-----------------------+--------------------------------------+
  60. [root@controller ~]# neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 #create a subnet on the network
  61. Created a new subnet:
  62. +-------------------+------------------------------------------------+
  63. | Field | Value |
  64. +-------------------+------------------------------------------------+
  65. | allocation_pools | {"start": "172.16.1.2", "end": "172.16.1.254"} |
  66. | cidr | 172.16.1.0/24 |
  67. | dns_nameservers | 114.114.114.114 |
  68. | enable_dhcp | True |
  69. | gateway_ip | 172.16.1.1 |
  70. | host_routes | |
  71. | id | 91f26704-6ead-4d73-870e-115dd8377998 |
  72. | ip_version | |
  73. | ipv6_address_mode | |
  74. | ipv6_ra_mode | |
  75. | name | private |
  76. | network_id | ce8a6c38-5a84-47c0-b058-9bdd8b67e179 |
  77. | subnetpool_id | |
  78. | tenant_id | a152b2b891a147dfa3068d66311ad0c3 |
  79. +-------------------+------------------------------------------------+
  80. Create a router
  81. [root@controller ~]# source admin-openrc.sh #source the admin credentials to gain access to admin-only commands
  82. [root@controller ~]# neutron net-update public --router:external #flag the public network as router: external
  83. Updated network: public
  84. [root@controller ~]# source demo-openrc.sh #source the demo credentials to gain access to user-level commands
  85. [root@controller ~]# neutron router-create router #create the router
  86. Created a new router:
  87. +-----------------------+--------------------------------------+
  88. | Field | Value |
  89. +-----------------------+--------------------------------------+
  90. | admin_state_up | True |
  91. | external_gateway_info | |
  92. | id | 649c8cfc-e117--b55d-cd9214792ae3 |
  93. | name | router |
  94. | routes | |
  95. | status | ACTIVE |
  96. | tenant_id | a152b2b891a147dfa3068d66311ad0c3 |
  97. +-----------------------+--------------------------------------+
  98. [root@controller ~]# neutron router-interface-add router private #attach the private subnet to the router as an interface
  99. Added interface-b387--81b8-a2cbeb5b6b4d to router router.
  100. [root@controller ~]# neutron router-gateway-set router public #set the public network as the router's gateway (a quick check is sketched after this list)
  101. Set gateway for router router
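
After the gateway is set, router-show should display a populated external_gateway_info field referencing the public network; an empty field usually means the net-update --router:external step was skipped. A quick check:

  [root@controller ~]# neutron router-show router #external_gateway_info should reference the public network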

Verify operation

  1. [root@controller ~]# source admin-openrc.sh #source the admin credentials to gain access to admin-only commands
  2. [root@controller ~]# ip netns #list the network namespaces; you should see one qrouter namespace and two qdhcp namespaces
  3. qrouter-649c8cfc-e117--b55d-cd9214792ae3 (id: )
  4. qdhcp-ce8a6c38-5a84-47c0-b058-9bdd8b67e179 (id: )
  5. qdhcp-5fc60cce---b9e2-c768af2ea302 (id: )
  6. [root@controller ~]# neutron router-port-list router #list the ports on the router to determine the gateway IP address on the public network
  7. +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  8. | id | name | mac_address | fixed_ips |
  9. +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  10. | -b387--81b8-a2cbeb5b6b4d | | fa::3e:a2:c5: | {"subnet_id": "91f26704-6ead-4d73-870e-115dd8377998", "ip_address": "172.16.1.1"} |
  11. | d3d1023b-5cfc-473b-ace9-84e25a6cfdba | | fa::3e:::d1 | {"subnet_id": "ac92ba15-daef-4bc3-a353-ed1325c85844", "ip_address": "192.168.1.201"} |
  12. +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  13. [root@controller ~]# ping -c 4 192.168.1.221 #ping this IP address from the controller node or any host on the public physical network
  14. PING 192.168.1.221 (192.168.1.221) () bytes of data.
  15. bytes from 192.168.1.221: icmp_seq= ttl= time=0.293 ms
  16. bytes from 192.168.1.221: icmp_seq= ttl= time=0.066 ms
  17. bytes from 192.168.1.221: icmp_seq= ttl= time=0.120 ms
  18. bytes from 192.168.1.221: icmp_seq= ttl= time=0.065 ms
  19. --- 192.168.1.221 ping statistics ---
  20. packets transmitted, received, % packet loss, time 3000ms
  21. rtt min/avg/max/mdev = 0.065/0.136/0.293/0.093 ms
  22. Generate a key pair
  23. [root@controller ~]# source demo-openrc.sh
  24. [root@controller ~]# ssh-keygen -q -N "" #you can skip ssh-keygen and use an existing public key instead
  25. Enter file in which to save the key (/root/.ssh/id_rsa):
  26. [root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey #upload the public key as key pair mykey
  27. [root@controller ~]# nova keypair-list #verify that the key pair was added
  28. +-------+-------------------------------------------------+
  29. | Name | Fingerprint |
  30. +-------+-------------------------------------------------+
  31. | mykey | ::::2d:e3::e5:a0::ea::8e:1b:a8:ae |
  32. +-------+-------------------------------------------------+
  33. Add security group rules (by default, the default security group applies to all instances and includes firewall rules that deny remote access to them; allowing at least ICMP (ping) and secure shell (SSH) is recommended; see the sketch after this block for listing the rules)
  34. [root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 #permit ICMP (ping)
  35. +-------------+-----------+---------+-----------+--------------+
  36. | IP Protocol | From Port | To Port | IP Range | Source Group |
  37. +-------------+-----------+---------+-----------+--------------+
  38. | icmp | -1 | -1 | 0.0.0.0/0 | |
  39. +-------------+-----------+---------+-----------+--------------+
  40. [root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 #permit secure shell (SSH) access
  41. +-------------+-----------+---------+-----------+--------------+
  42. | IP Protocol | From Port | To Port | IP Range | Source Group |
  43. +-------------+-----------+---------+-----------+--------------+
  44. | tcp | 22 | 22 | 0.0.0.0/0 | |
  45. +-------------+-----------+---------+-----------+--------------+
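
The rules just added can be reviewed with the Liberty-era novaclient; both the ICMP rule and the TCP/22 rule should be listed:

  [root@controller ~]# nova secgroup-list-rules default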

  1. #A flavor specifies the virtual resource allocation profile for an instance, including processor, memory, and storage
    [root@controller ~]# source demo-openrc.sh
    [root@controller ~]# nova flavor-list #list the available flavors; this walkthrough uses m1.tiny
  1. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  2. | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  3. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  4. | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
  5. | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
  6. | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
  7. | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
  8. | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
  9. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  10. [root@controller ~]# nova image-list
    +--------------------------------------+--------+--------+--------+
  11. | ID | Name | Status | Server |
  12. +--------------------------------------+--------+--------+--------+
  13. | 2df37e06-ed46--b5d0-f643640b6a52 | cirros | ACTIVE | |
  14. +--------------------------------------+--------+--------+--------+
  15. [root@controller ~]# neutron net-list
    +--------------------------------------+---------+-----------------------------------------------------+
  16. | id | name | subnets |
  17. +--------------------------------------+---------+-----------------------------------------------------+
  18. | 5fc60cce---b9e2-c768af2ea302 | public | ac92ba15-daef-4bc3-a353-ed1325c85844 192.168.1.0/ |
  19. | ce8a6c38-5a84-47c0-b058-9bdd8b67e179 | private | 91f26704-6ead-4d73-870e-115dd8377998 172.16.1.0/ |
  20. +--------------------------------------+---------+-----------------------------------------------------+
  21. [root@controller ~]# nova secgroup-list #list the available security groups
    +--------------------------------------+---------+------------------------+
  22. | Id | Name | Description |
  23. +--------------------------------------+---------+------------------------+
  24. | 0771996c--4ce0-b6c6-8a890a326295 | default | Default security group |
  25. +--------------------------------------+---------+------------------------+
  26. [root@controller ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=ce8a6c38-5a84-47c0-b058-9bdd8b67e179 --security-group default --key-name mykey private-instance #launch the instance
  27. +--------------------------------------+-----------------------------------------------+
  28. | Property | Value |
  29. +--------------------------------------+-----------------------------------------------+
  30. | OS-DCF:diskConfig | MANUAL |
  31. | OS-EXT-AZ:availability_zone | |
  32. | OS-EXT-STS:power_state | |
  33. | OS-EXT-STS:task_state | scheduling |
  34. | OS-EXT-STS:vm_state | building |
  35. | OS-SRV-USG:launched_at | - |
  36. | OS-SRV-USG:terminated_at | - |
  37. | accessIPv4 | |
  38. | accessIPv6 | |
  39. | adminPass | VLYaSAvPAE54 |
  40. | config_drive | |
  41. | created | --05T12::27Z |
  42. | flavor | m1.tiny () |
  43. | hostId | |
  44. | id | de88100a-47f1-4be5-b54d-e14d828e1150 |
  45. | image | cirros (2df37e06-ed46--b5d0-f643640b6a52) |
  46. | key_name | mykey |
  47. | metadata | {} |
  48. | name | private-instance |
  49. | os-extended-volumes:volumes_attached | [] |
  50. | progress | |
  51. | security_groups | default |
  52. | status | BUILD |
  53. | tenant_id | a152b2b891a147dfa3068d66311ad0c3 |
  54. | updated | --05T12::27Z |
  55. | user_id | 182ee839b7584748aedb1cbda6d55ce2 |
  56. +--------------------------------------+-----------------------------------------------+
  57. [root@controller ~]#nova list #check the instance status
    +--------------------------------------+------------------+--------+------------+-------------+--------------------+
  58. | ID | Name | Status | Task State | Power State | Networks |
  59. +--------------------------------------+------------------+--------+------------+-------------+--------------------+
  60. | de88100a-47f1-4be5-b54d-e14d828e1150 | private-instance | ACTIVE | - | Running | private=172.16.1.3 |
  61. +--------------------------------------+------------------+--------+------------+-------------+--------------------+
  62. [root@controller ~]# nova get-vnc-console private-instance novnc #obtain a Virtual Network Computing (VNC) session URL for the instance and open it in a web browser
    +-------+---------------------------------------------------------------------------------+
  63. | Type | Url |
  64. +-------+---------------------------------------------------------------------------------+
  65. | novnc | http://controller:6080/vnc_auto.html?token=ffec3792-a83a-4c2e-a138-bac3f8c7595d |
  66. +-------+---------------------------------------------------------------------------------+

Open the URL http://controller:6080/vnc_auto.html?token=ffec3792-a83a-4c2e-a138-bac3f8c7595d  #the browser must be able to resolve the controller hostname, or substitute its IP address

#The default user is cirros and its password is cubswin:)
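
Besides VNC, the instance can also be reached over SSH from the controller through the router's network namespace, which has an interface on 172.16.1.0/24. A minimal sketch (not in the original walkthrough), reusing the qrouter namespace and the instance IP shown above; the mykey private key or the cirros password authenticates:

  [root@controller ~]# ip netns exec qrouter-649c8cfc-e117--b55d-cd9214792ae3 ping -c 4 172.16.1.3
  [root@controller ~]# ip netns exec qrouter-649c8cfc-e117--b55d-cd9214792ae3 ssh cirros@172.16.1.3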

十、Add the dashboard (Horizon)

The OpenStack Dashboard, better known as Horizon, is a web interface that lets cloud administrators and users manage OpenStack resources and services. It makes web-based interaction with the OpenStack Compute cloud controller possible through the OpenStack APIs. Horizon supports custom branding of the dashboard and provides a set of core classes, reusable templates, and tools.

Install and configure

  1. [root@controller ~]# yum install openstack-dashboard -y
  2. [root@controller ~]# vim /etc/openstack-dashboard/local_settings #edit the /etc/openstack-dashboard/local_settings file
  3. OPENSTACK_HOST = "controller" #configure the dashboard to use the OpenStack services on the controller node
  4. ALLOWED_HOSTS = ['*', ] #allow all hosts to access the dashboard
  5. CACHES = { #configure the memcached session storage service, and comment out any other session storage configuration
  6. 'default': {
  7. 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
  8. 'LOCATION': 'controller:11211',
  9. }
  10. }
  11. OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" #configure user as the default role for users created via the dashboard
  12. OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True #enable the multi-domain model
  13. OPENSTACK_API_VERSIONS = { #configure the service API versions so the dashboard logs in through the Keystone v3 API
  14. "identity": 3,
  15. "volume": 2,
  16. }
  17. TIME_ZONE = "Asia/Shanghai" #set the time zone
  18. ===================================================
  19. If you chose networking option 1, disable support for layer-3 network services; with networking option 2 the defaults can stay:
  20. OPENSTACK_NEUTRON_NETWORK = {
  21. ...
  22. 'enable_router': False,
  23. 'enable_quotas': False,
  24. 'enable_distributed_router': False,
  25. 'enable_ha_router': False,
  26. 'enable_lb': False,
  27. 'enable_firewall': False,
  28. 'enable_vpn': False,
  29. 'enable_fip_topology_check': False,
  30. }
    =====================================================
  31. [root@controller ~]# systemctl enable httpd.service memcached.service #start the web server and session storage service and enable them to start at boot
  32. [root@controller ~]# systemctl restart httpd.service memcached.service
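
Before switching to a browser, the dashboard can be smoke-tested from the controller itself; a 200 response (or a redirect to the login page) means Apache and the WSGI application are wired up correctly:

  [root@controller ~]# curl -sI http://controller/dashboard | head -n 1 #expect HTTP/1.1 200 OK or a 30x redirect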

Open http://controller/dashboard in a browser to access the dashboard (the browser must be able to resolve the controller hostname).

Log in as "admin" or "demo"; the password is 123456.

After logging in:

#If the site returns a 500 error and the error log shows the error below

The fix is as follows:

  1. [root@controller ~]# grep "WSGIApplicationGroup" -B 1 /etc/httpd/conf.d/openstack-dashboard.conf #add the line "WSGIApplicationGroup %{GLOBAL}" directly below WSGISocketPrefix run/wsgi
  2. WSGISocketPrefix run/wsgi
  3. WSGIApplicationGroup %{GLOBAL}
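
After adding the WSGIApplicationGroup line, restart httpd so the change takes effect:

  [root@controller ~]# systemctl restart httpd.service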

Due to length constraints, the rest of the walkthrough continues in the post CentOS7.4安装部署openstack [Liberty版] (二).
