Preface

Automated deployment tooling has long been regarded as a key direction of OpenStack development, and tools such as Kolla and TripleO are already well recognized by the market. But today we are not talking about automated deployment — interested readers can check out 《Kolla 让 OpenStack 部署更贴心》. This post goes the other way and pursues an extremely minimal, fully manual OpenStack deployment, stripping away every external factor to look at OpenStack itself. It is meant as an introductory, popular-science style walkthrough of OpenStack.

BTW, an OpenStack engineer should not depend too heavily on automated deployment tools; doing so keeps one's understanding of OpenStack superficial. It is worth spending some time deploying it by hand at least once, to see what OpenStack looks like in its most primitive form.

Official documentation: https://docs.openstack.org/install-guide/

OpenStack Architecture

Conceptual architecture

Logical architecture

Choosing a Network Option

Networking Option 1: Provider networks

The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.

The OpenStack user requires more information about the underlying network infrastructure to create a virtual network to exactly match the infrastructure.

WARNING: This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option below if you desire these features.

Provider networks bridge the nodes' virtual networks onto the operator's physical network (e.g. L2/L3 switches and routers). It is the simpler of the two models, and because physical network devices carry the traffic it also delivers better performance. However, since Neutron does not need to run the L3 router service in this model, advanced features such as LBaaS and FWaaS cannot be supported.

Networking Option 2: Self-service networks

The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT. Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS.

The OpenStack user can create virtual networks without the knowledge of underlying infrastructure on the data network. This can also include VLAN networks if the layer-2 plug-in is configured accordingly.

Self-service networks offer a complete L2/L3 network-virtualization solution: users can create virtual networks without knowing anything about the underlying physical topology, and Neutron provides multi-tenant, isolated, multi-plane networking.

Two-Node Deployment Network Topology

Controller

  • ens160: 172.18.22.231/24
  • ens192: 10.0.0.1/24
  • ens224: br-provider NIC
  • sda: system disk
  • sdb: Cinder storage disk

Compute

  • ens160: 172.18.22.232/24
  • ens192: 10.0.0.2/24
  • sda: system disk

NOTE: Throughout this article, "fanguiju" is used as a placeholder password — replace it with your own.

Base Services

DNS Name Resolution

NOTE: We simply use the hosts file instead of running a DNS server.

  • Controller
[root@controller ~]# cat /etc/hosts
127.0.0.1 controller localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.22.231 controller
172.18.22.232 compute
  • Compute
[root@compute ~]# cat /etc/hosts
127.0.0.1 controller localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.22.231 controller
172.18.22.232 compute

NTP Time Synchronization

  • Controller
[root@controller ~]# cat /etc/chrony.conf | grep -v ^# | grep -v ^$
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 172.18.22.0/24
logdir /var/log/chrony
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ ntp1.ams1.nl.leaseweb.net 2 6 77 24 -4781us[-6335us] +/- 178ms
^? static.186.49.130.94.cli> 0 8 0 - +0ns[ +0ns] +/- 0ns
^? sv1.ggsrv.de 2 7 1 17 -36ms[ -36ms] +/- 130ms
^* 124-108-20-1.static.beta> 2 6 77 24 +382us[-1172us] +/- 135ms
  • Compute
[root@compute ~]# cat /etc/chrony.conf | grep -v ^# | grep -v ^$
server controller iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
[root@compute ~]# systemctl enable chronyd.service
[root@compute ~]# systemctl start chronyd.service
[root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller 0 7 0 - +0ns[ +0ns] +/- 0ns

YUM Repositories

  • Controller & Compute
yum install centos-release-openstack-rocky -y
yum upgrade -y
yum install python-openstackclient -y
yum install openstack-selinux -y

MySQL Database

  • Controller
yum install mariadb mariadb-server python2-PyMySQL -y
[root@controller ~]# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 172.18.22.231
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# systemctl status mariadb.service

# Initialize the MySQL root password
[root@controller ~]# mysql_secure_installation

Problem: every OpenStack service API responds very slowly, and "Too many connections" exceptions appear.

Troubleshooting: many OpenStack services access the MySQL database, so MySQL needs some tuning — for example raising the maximum number of connections, shortening the connection wait timeout, and automatically clearing idle connections at an interval. e.g.

[root@controller ~]# cat /etc/my.cnf | grep -v ^$ | grep -v ^#
[client-server]
[mysqld]
symbolic-links=0
max_connections=1000
wait_timeout=5
# interactive_timeout = 600
!includedir /etc/my.cnf.d
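
To confirm the tuning actually took effect, a quick check like the following can help (a minimal sketch; the thresholds above are only examples, adjust them to your environment):

# Restart MariaDB so the new settings are loaded, then inspect them
systemctl restart mariadb.service
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -u root -p -e "SHOW STATUS LIKE 'Threads_connected';"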

RabbitMQ Message Queue

  • Controller
yum install rabbitmq-server -y
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl status rabbitmq-server.service

# Create the RabbitMQ openstack user and grant it permissions
[root@controller ~]# rabbitmqctl add_user openstack fanguiju
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Problem

Error: unable to connect to node rabbit@localhost: nodedown

DIAGNOSTICS
===========
attempted to contact: [rabbit@localhost]

rabbit@localhost:
* connected to epmd (port 4369) on localhost
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* Hostname mismatch: node "rabbit@controller" believes its host is different. Please ensure that hostnames resolve the same way locally and on "rabbit@controller"

current node details:
- node name: 'rabbitmq-cli-50@controller'
- home dir: /var/lib/rabbitmq
- cookie hash: J6O4pu2pK+BQLf1TTaZSwQ==

Troubleshooting: the hostname mismatch happens because, after the hostname was changed, RabbitMQ (Erlang) still holds stale state such as the old node name and cookie; rebooting the operating system resolves it.
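
After the reboot, a minimal way to double-check the broker (assuming the default node name rabbit@controller) is:

# Confirm the node is up under the expected name and the openstack user exists
rabbitmqctl status | head -n 5
rabbitmqctl list_users
rabbitmqctl list_permissions -p /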

For more on RabbitMQ, see 《快速入门分布式消息队列之 RabbitMQ》.

Memcached

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

  • Controller
yum install memcached python-memcached -y
[root@controller ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
# OPTIONS="-l 127.0.0.1,::1"
OPTIONS="-l 127.0.0.1,::1,controller" [root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl status memcached.service
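
A quick sanity check that memcached is answering on the controller address (a sketch; assumes nc from the nmap-ncat package is installed):

# Ask memcached for its stats over TCP port 11211
echo stats | nc controller 11211 | head -n 10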

Etcd

OpenStack services may use Etcd, a distributed reliable key-value store for distributed key locking, storing configuration, keeping track of service live-ness and other scenarios.

  • Controller
yum install etcd -y
[root@controller ~]# cat  /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.18.22.231:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.18.22.231:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.18.22.231:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.18.22.231:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.18.22.231:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new" [root@controller ~]# systemctl enable etcd
[root@controller ~]# systemctl start etcd
[root@controller ~]# systemctl status etcd
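
To confirm etcd is healthy, the HTTP API can be queried directly (a minimal sketch against the advertised client URL configured above):

# Expect {"health":"true"} and a member list containing "controller"
curl http://172.18.22.231:2379/health
curl http://172.18.22.231:2379/v2/members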

OpenStack Projects

Keystone(Controller)

For how Keystone authentication works, see 《OpenStack 组件实现原理 — Keystone 认证功能》.

  • Packages
yum install openstack-keystone httpd mod_wsgi -y
  • Configuration
# /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:fanguiju@controller/keystone

[token]
provider = fernet
  • Create the keystone database
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'fanguiju';
  • Initialize the keystone database
su -s /bin/sh -c "keystone-manage db_sync" keystone
  • Set up the Fernet keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

For the characteristics of Fernet keys, see 《理解 Keystone 的四种 Token》.
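
In day-to-day operation the Fernet keys should also be rotated periodically (e.g. from cron); a minimal sketch:

# Rotate the Fernet keys; old keys are kept so previously issued tokens stay valid until they expire
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
ls /etc/keystone/fernet-keys/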

  • Bootstrap the Keystone service. This automatically creates the default domain, the admin project, the admin user (with the given password), the admin, member and reader roles, as well as the keystone service entry and the identity endpoints.
keystone-manage bootstrap --bootstrap-password fanguiju \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
  • Configure and start the Apache HTTP server

    NOTE: Keystone's web server runs on top of the Apache HTTP server as an httpd virtual host.
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# /usr/share/keystone/wsgi-keystone.conf
# Keystone virtual-host configuration file

Listen 5000

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
    SetHandler wsgi-script
    Options +ExecCGI
    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>
# /etc/httpd/conf/httpd.conf
ServerName controller
systemctl enable httpd.service
systemctl start httpd.service
systemctl status httpd.service
  • Create projects

Export temporary credentials for authenticating as admin:

export OS_USERNAME=admin
export OS_PASSWORD=fanguiju
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

NOTE: keystone-manage bootstrap has already initialized the admin project and the admin user, so all that remains is to create a service project that will hold the OpenStack services (e.g. Nova, Cinder, Neutron). If needed, you can also create an ordinary project myproject together with a user myuser under it. e.g.

openstack project create --domain default --description "Service Project" service

openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
[root@controller ~]# openstack domain list
+---------+---------+---------+--------------------+
| ID | Name | Enabled | Description |
+---------+---------+---------+--------------------+
| default | Default | True | The default domain |
+---------+---------+---------+--------------------+

[root@controller ~]# openstack project list
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| 64e45ce71e4843f3af4715d165f417b6 | service |
| a2b55e37121042a1862275a9bc9b0223 | admin |
| a50bbb6cd831484d934eb03f989b988b | myproject |
+----------------------------------+-----------+

[root@controller ~]# openstack group list

[root@controller ~]# openstack user list
+----------------------------------+--------+
| ID | Name |
+----------------------------------+--------+
| 2cd4bbe862e54afe9292107928338f3f | myuser |
| 92602c24daa24f019f05ecb95f1ce68e | admin |
+----------------------------------+--------+

[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID | Name |
+----------------------------------+--------+
| 3bc0396aae414b5d96488d974a301405 | reader |
| 811f5caa2ac747a5b61fe91ab93f2f2f | myrole |
| 9366e60815bc4f1d80b1e57d51f7c228 | admin |
| d9e0d3e5d1954feeb81e353117c15340 | member |
+----------------------------------+--------+
  • Verify
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2019-03-29T12:36:47+0000 |
| id | gAAAAABcngNPjXntVhVAmLbek0MH7ZSzeYGC4cfipy4E3aiy_dRjEyJiPehNH2dkDVI94vHHHdni1h27BJvLp6gqIqglGVDHallPn3PqgZt3-JMq_dyxx2euQL1bhSNX9rAUbBvzL9_0LBPKw2glQmmRli9Qhu8QUz5tRkbxAb6iP7R2o-mU30Y |
| project_id | a2b55e37121042a1862275a9bc9b0223 |
| user_id | 92602c24daa24f019f05ecb95f1ce68e |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  • Create an OpenStack client environment script
[root@controller ~]# cat adminrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=fanguiju
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

[root@controller ~]# source adminrc
[root@controller ~]# openstack catalog list
+----------+----------+----------------------------------------+
| Name | Type | Endpoints |
+----------+----------+----------------------------------------+
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
+----------+----------+----------------------------------------+

Glance(Controller)

For more on the Glance architecture, see 《OpenStack 组件实现原理 — Glance 架构(V1/V2)》.

  • Create the Glance user, service and endpoints
openstack service create --name glance --description "OpenStack Image" image

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
+-----------+-----------+-----------------------------------------+
  • Packages
yum install openstack-glance -y
  • Configuration
# /etc/glance/glance-api.conf

[glance_store]
stores = file,http
default_store = file
# Local directory where image files are stored
filesystem_store_datadir = /var/lib/glance/images/

[database]
connection = mysql+pymysql://glance:fanguiju@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = fanguiju

[paste_deploy]
flavor = keystone
# /etc/glance/glance-registry.conf 

[database]
connection = mysql+pymysql://glance:fanguiju@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = fanguiju

[paste_deploy]
flavor = keystone
  • Create the Glance database
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'fanguiju';
  • Initialize the Glance database
su -s /bin/sh -c "glance-manage db_sync" glance
  • Start the services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
  • Verify
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

openstack image create "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 59355e1b-2342-497b-9863-5c8b9969adf5 | cirros | active |
+--------------------------------------+--------+--------+

[root@controller ~]# ll /var/lib/glance/images/
total 12980
-rw-r-----. 1 glance glance 13287936 Mar 29 10:33 59355e1b-2342-497b-9863-5c8b9969adf5
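
As an extra check, the checksum recorded by Glance should match the uploaded file (a small sketch; the image name is the one created above):

# Compare the md5 of the source image with the checksum stored in Glance
md5sum cirros-0.4.0-x86_64-disk.img
openstack image show cirros -f value -c checksum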

Nova(Controller)

For more on Nova, see 《OpenStack 组件部署 — Nova Overview》.

  • Create the Nova and Placement users, services and endpoints
openstack service create --name nova --description "OpenStack Compute" compute

openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

openstack service create --name placement --description "Placement API" placement

openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| nova | compute | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
+-----------+-----------+-----------------------------------------+
  • Packages
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api -y
  • Configuration
# /etc/nova/nova.conf
[DEFAULT]
my_ip = 172.18.22.231
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:fanguiju@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:fanguiju@controller/nova_api

[database]
connection = mysql+pymysql://nova:fanguiju@controller/nova

[placement_database]
connection = mysql+pymysql://placement:fanguiju@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = fanguiju

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = fanguiju
  • Create the Nova-related databases
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';

GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'fanguiju';
  • Initialize the Nova API and Placement databases
su -s /bin/sh -c "nova-manage api_db sync" nova

For more on Placement, see 《OpenStack Placement Project》.

  • Initialize the Nova database
su -s /bin/sh -c "nova-manage db sync" nova
  • Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
  • Create cell1
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
  • Verify cell0 and cell1
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

For more on Nova Cells, see 《Nova Cell V2 详解》.

  • Register the Placement web server with httpd
# /etc/httpd/conf.d/00-nova-placement-api.conf

Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
    <IfVersion < 2.4>
      Order allow,deny
      Allow from all
    </IfVersion>
  </Directory>
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
systemctl restart httpd
systemctl status httpd
  • Start the services
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service \
openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
  • Verify
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2019-03-29T15:22:51.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-03-29T15:22:52.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2019-03-29T15:22:51.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

Nova(Compute)

NOTE: In our plan the Controller also doubles as a Compute node, so the steps below must be carried out on the Controller as well.

NOTE: In a virtualized lab environment, first check whether nested virtualization is enabled in the VMs. e.g.

[root@controller ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
16
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
16
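
If the count comes back as 0 inside the VM, nested virtualization has to be enabled on the outer hypervisor first. A minimal sketch for a KVM host with an Intel CPU (assumption: kvm_intel; use kvm_amd on AMD; on VMware or VirtualBox this is a checkbox in the VM's CPU settings instead):

# Enable nested virtualization persistently, then reload the module (no VMs may be running)
cat > /etc/modprobe.d/kvm-nested.conf <<EOF
options kvm_intel nested=1
EOF
modprobe -r kvm_intel
modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested
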
  • Packages
yum install openstack-nova-compute -y
  • Configuration
# /etc/nova/nova.conf 

[DEFAULT]
my_ip = 172.18.22.232
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:fanguiju@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances

[api_database]
connection = mysql+pymysql://nova:fanguiju@controller/nova_api

[database]
connection = mysql+pymysql://nova:fanguiju@controller/nova

[placement_database]
connection = mysql+pymysql://placement:fanguiju@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = fanguiju

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = fanguiju

[libvirt]
virt_type = qemu
  • Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
  • Register the compute node into a cell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Problem: starting nova-compute.service on the compute node hangs. The traceback shows it is stuck on MQ communication between nova-compute and nova-conductor, which suggests that nova-compute on the compute node cannot reach RabbitMQ on the controller. A telnet test confirms the connection indeed fails.

[root@compute ~]# telnet 172.18.22.231 5672
Trying 172.18.22.231...
telnet: connect to address 172.18.22.231: No route to host

It is a firewall problem; open the RabbitMQ-related ports on the controller:

firewall-cmd --zone=public --permanent --add-port=4369/tcp &&
firewall-cmd --zone=public --permanent --add-port=25672/tcp &&
firewall-cmd --zone=public --permanent --add-port=5671-5672/tcp &&
firewall-cmd --zone=public --permanent --add-port=15672/tcp &&
firewall-cmd --zone=public --permanent --add-port=61613-61614/tcp &&
firewall-cmd --zone=public --permanent --add-port=1883/tcp &&
firewall-cmd --zone=public --permanent --add-port=8883/tcp
firewall-cmd --reload

To keep the rest of the experiment simple, you may simply disable the firewall entirely:

 systemctl stop firewalld
systemctl disable firewalld
  • Verify

    Once nova-compute.service is running on both controller and compute, we have two compute nodes:
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2019-03-29T16:15:42.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-03-29T16:15:44.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2019-03-29T16:15:42.000000 |
| 6 | nova-compute | controller | nova | enabled | up | 2019-03-29T16:15:41.000000 |
| 7 | nova-compute | compute | nova | enabled | up | 2019-03-29T16:15:47.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

# Check the cells and placement API are working successfully:
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: API Service Version |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Request Spec Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Console Auths |
| Result: Success |
| Details: None |
+--------------------------------+

Neutron Open vSwitch mechanism driver(Controller)

For more on the Neutron architecture and internals, see 《我非要捅穿这 Neutron》.



  • Create the Neutron user, service and endpoints
openstack service create --name neutron --description "OpenStack Networking" network

openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
  • Packages
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
  • Configuration
# /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:fanguiju@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:fanguiju@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = fanguiju

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = fanguiju

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
# The lab has few spare IP addresses, so use the VXLAN network type for tenant networks
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,l2population

[securitygroup]
enable_ipset = true

[ml2_type_vxlan]
vni_ranges = 1:1000
# /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
# Physical network mapping; the OvS bridge br-provider must be created manually
bridge_mappings = provider:br-provider
# OVERLAY_INTERFACE_IP_ADDRESS
local_ip = 10.0.0.1

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
firewall_driver = iptables_hybrid
# /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = openvswitch
# The external_network_bridge option intentionally contains no value.
external_network_bridge =
# /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
# /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = fanguiju
# /etc/nova/nova.conf

...

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = fanguiju
service_metadata_proxy = true
metadata_proxy_shared_secret = fanguiju
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  • Open vSwitch
systemctl enable openvswitch
systemctl start openvswitch
systemctl status openvswitch
ovs-vsctl add-br br-provider
ovs-vsctl add-port br-provider ens224
[root@controller ~]# ovs-vsctl show
8ef8d299-fc4c-407a-a937-5a1058ea3355
    Bridge br-provider
        Port "ens224"
            Interface "ens224"
        Port br-provider
            Interface br-provider
                type: internal
    ovs_version: "2.10.1"
  • Create the Neutron database
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'fanguiju';
  • Initialize the Neutron database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  • Start the services
systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service \
neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start neutron-server.service \
neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl status neutron-server.service \
neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
systemctl status neutron-l3-agent.service

NOTE: When the OvS agent starts it automatically creates the integration bridge br-int and the tunnel bridge br-tun. The manually created br-provider is used for flat/VLAN (non-tunnel) networks.

[root@controller ~]# ovs-vsctl show
8ef8d299-fc4c-407a-a937-5a1058ea3355
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-provider
            Interface int-br-provider
                type: patch
                options: {peer=phy-br-provider}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-provider
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-provider
            Interface phy-br-provider
                type: patch
                options: {peer=int-br-provider}
        Port "ens224"
            Interface "ens224"
        Port br-provider
            Interface br-provider
                type: internal
    ovs_version: "2.10.1"

Neutron Open vSwitch mechanism driver(Compute)

  • Packages
yum install openstack-neutron-openvswitch ipset -y
  • Configuration
# /etc/neutron/neutron.conf

[DEFAULT]
transport_url = rabbit://openstack:fanguiju@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = fanguiju

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
local_ip = 10.0.0.2

[agent]
tunnel_types = vxlan
l2_population = True
# /etc/nova/nova.conf

...

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = fanguiju
  • Open vSwitch
systemctl enable openvswitch
systemctl start openvswitch
systemctl status openvswitch
  • Start the services
systemctl restart openstack-nova-compute.service

systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service
[root@compute ~]# ovs-vsctl show
80d8929a-9dc8-411c-8d20-8f1d0d6e2056
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.10.1"

NOTE: Because only the VXLAN network type is enabled here, the compute node only needs the OvS bridges br-tun and br-int; no br-provider is required.

  • Verify
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 41925586-9119-4709-bc23-4668433bd413 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| 43281ac1-7699-4a81-a5b6-d4818f8cf8f9 | Open vSwitch agent | controller | None | :-) | UP | neutron-openvswitch-agent |
| b815e569-c85d-4a37-84ea-7bdc5fe5653c | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| d1ef7214-d26c-42c8-ba0b-2a1580a44446 | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| f55311fc-635c-4985-ae6b-162f3fa8f886 | Open vSwitch agent | compute | None | :-) | UP | neutron-openvswitch-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
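
With all agents alive, a quick end-to-end exercise is to create a VXLAN self-service network and a router (a sketch under the configuration above; the names and the 192.168.100.0/24 subnet are arbitrary examples):

openstack network create selfservice-net
openstack subnet create --network selfservice-net --subnet-range 192.168.100.0/24 selfservice-subnet
openstack router create router1
openstack router add subnet router1 selfservice-subnet
openstack network list
openstack router show router1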

Horizon(Controller)

  • Packages
yum install openstack-dashboard -y
  • Configuration
# /etc/openstack-dashboard/local_settings

...
OPENSTACK_HOST = "controller"
...
# Allow all hosts to access the dashboard
ALLOWED_HOSTS = ['*', ]
...
# Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
...
# Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
...
# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
...
# Configure API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
...
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
...
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
...
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_ha_router': False,
'enable_fip_topology_check': True,
'supported_vnic_types': ['*'],
'physical_networks': [],
}
# /etc/httpd/conf.d/openstack-dashboard.conf

WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}

WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  Options All
  AllowOverride All
  Require all granted
</Directory>

<Directory /usr/share/openstack-dashboard/static>
  Options All
  AllowOverride All
  Require all granted
</Directory>
  • Start the services
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
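
The dashboard should now answer at http://controller/dashboard; a quick check from the command line (a sketch):

# Expect an HTTP 200 or a 302 redirect to the login page
curl -I http://controller/dashboard/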

Cinder(Controller)

  • Prepare the LVM backend storage
yum install lvm2 device-mapper-persistent-data -y

# /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sdb/", "r/.*/" ]
}

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
  • Create the Cinder user, services and endpoints
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack catalog list
+-----------+-----------+------------------------------------------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+------------------------------------------------------------------------+
| nova | compute | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| cinderv2 | volumev2 | RegionOne |
| | | admin: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | public: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | internal: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | |
| neutron | network | RegionOne |
| | | internal: http://controller:9696 |
| | | RegionOne |
| | | admin: http://controller:9696 |
| | | RegionOne |
| | | public: http://controller:9696 |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
| cinderv3 | volumev3 | RegionOne |
| | | internal: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | admin: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | public: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | |
+-----------+-----------+------------------------------------------------------------------------+
  • Packages
yum install openstack-cinder targetcli python-keystone -y
  • Configuration
# /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 172.18.22.231
enabled_backends = lvm
auth_strategy = keystone
transport_url = rabbit://openstack:fanguiju@controller
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:fanguiju@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = fanguiju

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
# /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne
  • Create the Cinder database
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'fanguiju';
  • Initialize the Cinder database
su -s /bin/sh -c "cinder-manage db sync" cinder
  • Start the services
systemctl restart openstack-nova-api.service

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
  • Verify
[root@controller ~]# openstack volume service list
+------------------+----------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2019-04-25T09:26:49.000000 |
| cinder-volume | controller@lvm | nova | enabled | up | 2019-04-25T09:26:49.000000 |
+------------------+----------------+------+---------+-------+----------------------------+
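
With cinder-volume up against the LVM backend, a simple end-to-end test is to create a small volume and check that a logical volume appears in the cinder-volumes VG (a sketch; the 1 GiB size and the name are arbitrary):

openstack volume create --size 1 test-vol
openstack volume list
lvs cinder-volumes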

Final Words

That completes the most minimal manual OpenStack deployment; you can now try launching an instance that boots from the image. The routine Launch Instance steps are not repeated here. On top of this we could continue to layer on more projects, but the original intent of this article is to set up the base environment for a further discussion of Open vSwitch in Neutron, so we will stop here. Hopefully the manual deployment process shows that OpenStack in its most primitive form is really not that complicated — the key is a global understanding of where each component sits and what role it plays. As mentioned above, more Open vSwitch in Neutron content will follow; stay tuned at https://is-cloud.blog.csdn.net
