1. Via the dashboard (web UI) or the CLI, the client sends its credentials to Keystone through the RESTful API.
2. Keystone authenticates the credentials and returns an auth token for the request.
3. The client then sends a "boot instance" request to nova-api through the RESTful API, carrying the auth token.
4. After receiving the request, nova-api asks Keystone to verify that the token belongs to a valid user and is still valid.
5. Keystone validates the token and returns the result to nova-api.
6. Once authentication passes, nova-api talks to the database and creates the initial database record for the new instance.
7. nova-api calls RabbitMQ to ask nova-scheduler whether resources (a compute host) are available to create the instance.
8. nova-scheduler listens on the message queue and picks up the request from nova-api.
9. nova-scheduler queries the nova database for the state of the compute resources and, using its scheduling algorithms, selects a host that satisfies the instance's requirements.
10. If a suitable host is found, nova-scheduler updates the instance record in the database with the chosen physical host.
11. nova-scheduler sends the create-instance request to nova-compute via an RPC call.
12. nova-compute picks up the create-instance message from its message queue.
13. nova-compute asks nova-conductor, via RPC, for the instance's details (e.g. the flavor).
14. nova-conductor picks up the request from the message queue.
15. nova-conductor looks up the instance information referenced in the message.
16. nova-conductor retrieves the instance information from the database.
17. nova-conductor publishes the instance information back onto the message queue.
18. nova-compute picks up the instance information from its message queue.
19. nova-compute asks glance-api for the image needed to create the instance.
20. glance-api verifies the token with Keystone and returns the result.
21. Once the token is validated, nova-compute obtains the image information (URL).
22. nova-compute asks neutron-server for the network information needed to create the instance.
23. neutron-server verifies the token with Keystone and returns the result.
24. Once the token is validated, nova-compute obtains the instance's network information.
25. nova-compute asks cinder-api for the persistent storage information needed by the instance.
26. cinder-api verifies the token with Keystone and returns the result.
27. Once the token is validated, nova-compute obtains the instance's persistent storage information.
28. nova-compute uses the instance information to call the configured hypervisor driver and create the virtual machine.
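For reference, this whole flow is kicked off by a single boot request from the client; a minimal hedged sketch of the CLI call (the flavor name and network ID are hypothetical placeholders, the image name matches the cirros image uploaded later in this guide):
openstack server create --image cirros3.5 --flavor m1.tiny --nic net-id=<internal-net-id> demo-vm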
controller1 10.1.36.21 Trunk Trunk management node
yum install -y vim net-tools wget lrzsz tree screen lsof tcpdump nmap bridge-utils
---------------------------------------------------------------------------------------------------------------
Our MySQL (Galera) cluster is deployed on the OpenStack control nodes 10.1.36.21, 10.1.36.22 and 10.1.36.23. If you have spare capacity, the database can instead be deployed on three dedicated servers.
Copy this file to mariadb-2 and mariadb-3, and remember to change wsrep_node_name and wsrep_node_address to each node's own hostname and IP, as sketched below.
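A hedged sketch of the per-node lines to adjust (standard Galera option names; here assuming mariadb-2 corresponds to 10.1.36.22):
wsrep_node_name = mariadb-2
wsrep_node_address = 10.1.36.22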
tcp 0 0 0.0.0.0:4567 0.0.0.0:* LISTEN 17908/mysqld
tcp 0 0 10.1.36.21:3306 0.0.0.0:* LISTEN 17908/mysqld
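Before running the GRANT statements below, the databases themselves have to exist; that step is not shown explicitly in these notes, so here is a hedged sketch of what it typically looks like:
mysql -u root -e "CREATE DATABASE IF NOT EXISTS keystone;"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS glance;"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS nova;"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS nova_api;"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS nova_cell0;"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS placement;"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS neutron;"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS cinder;"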
mysql -u root -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e " GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '04aea9de5f79'; "
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '04aea9de5f79';"
scp /var/lib/rabbitmq/.erlang.cookie root@10.1.36.22:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@10.1.36.23:/var/lib/rabbitmq/.erlang.cookie
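After the Erlang cookie is in place, the other two controllers typically join the cluster; a hedged sketch (assuming the first node's RabbitMQ node name is rabbit@controller1 and rabbitmq-server is already installed on all three nodes):
# on controller2 and controller3
systemctl restart rabbitmq-server
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@controller1
rabbitmqctl start_app
rabbitmqctl cluster_status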
On controller1 (the disc node), add the user and enable the management plugin.
Add the openstack user:
[root@controller1 ~]# rabbitmqctl add_user openstack 04aea9de5f79
Note: make sure the hostname matches what is listed in /etc/hosts before running this, otherwise the command fails with an error.
Replace RABBIT_DBPASS with a suitable password (04aea9de5f79 is used here).
Grant the openstack user configure, write and read permissions:
[root@controller1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
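A quick hedged check that the user and its permissions are in place:
[root@controller1 ~]# rabbitmqctl list_users
[root@controller1 ~]# rabbitmqctl list_permissions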
---------------------------------------------------------------------------------------
Check the listening ports again; the web management UI is on port 15672:
# netstat -lntup | grep 5672
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 4900/beam.smp
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 4900/beam.smp
tcp6 0 0 :::5672 :::* LISTEN 4900/beam.smp
---------------------------------------------------------------------------------------
Open 10.1.36.28:15672 in a web browser and log in with user guest / password guest.
After logging in:
Admin -> copy the administrator tag -> click the openstack user -> Update this user ->
Tags: paste administrator -> set both password fields to 04aea9de5f79 -> logout
Then log back in with user openstack / password 04aea9de5f79.
Install Memcached
Memcached must be installed on every control node.
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller nodes. For production deployments, we recommend protecting it with a combination of firewalling, authentication, and encryption.
Install and configure the components
1. Install the packages:
[root@controller1 ~]# yum install -y memcached python-memcached
2. Edit /etc/sysconfig/memcached and complete the following:
* Configure the listen address so that other nodes can reach the service over the management network (the official guide suggests the controller's management IP; here we simply listen on all IPv4 interfaces):
OPTIONS="-l 0.0.0.0,::1"
Other settings in the file:
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="4096"
CACHESIZE="1024"
OPTIONS="-l 0.0.0.0,::1"
Finalize the installation
* Start the Memcached service and configure it to start at boot:
systemctl enable memcached.service
systemctl start memcached.service
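A quick hedged sanity check that memcached is listening on the port configured above (11211):
netstat -lntup | grep 11211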
Chapter 3 OpenStack Identity Service: Keystone
---------------------------------------------------------------------------------------
What Keystone does: users and authentication (user permissions and user-action tracking);
service catalog: provides a catalog of all services and the endpoints of their APIs.
Core concepts: User, Tenant (project), Token, Role, Service, Endpoint.
----------------------------------------------------------------------------------------
1. Install keystone
[root@controller1 ~]# yum install -y openstack-keystone httpd mod_wsgi
[root@controller1 ~]# openssl rand -hex 10      # generate a random value (used as the admin token)
dc46816a3e103ec2a700
Edit /etc/keystone/keystone.conf and complete the following:
In the [DEFAULT] section, define the value of the initial administration token:
[DEFAULT]
...
admin_token = ADMIN_TOKEN
Replace ADMIN_TOKEN with the random value generated in the previous step.
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://keystone:04aea9de5f79@10.1.36.28/keystone
(The official guide uses the KEYSTONE_DBPASS placeholder here; replace it with the password you chose for the keystone database, 04aea9de5f79 in this deployment, as shown in the full configuration below.)
In the [token] section, configure the Fernet token provider:
[token]
...
provider = fernet
The Identity service database is initialized (db_sync) after the configuration is complete.
The finished /etc/keystone/keystone.conf:
[root@controller1 ~]# grep -vn '^$\|^#' /etc/keystone/keystone.conf
[DEFAULT]
admin_token = dc46816a3e103ec2a700
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:04aea9de5f79@10.1.36.28/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_tokens]
[healthcheck]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
servers=10.1.36.28:11211
[oauth1]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[profiler]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
driver = memcache
[tokenless_auth]
[trust]
Note: unless there is a specific reason not to, use the VIP for the IP addresses in the configuration; this is done for cluster high availability. Wherever 10.1.36.28 appears in later configurations it is this VIP, and this will not be repeated.
-----------------------------------------------------------------------------------------------
Sync the database. Watch the file permissions: use su -s to run the command as the keystone user:
[root@controller1 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@controller1 ~]# tail /var/log/keystone/keystone.log    # check whether the sync produced any errors
2020-05-12 09:54:25.345 31846 INFO migrate.versioning.api [-] 56 -> 57...
2020-05-12 09:54:25.356 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.357 31846 INFO migrate.versioning.api [-] 57 -> 58...
2020-05-12 09:54:25.368 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.368 31846 INFO migrate.versioning.api [-] 58 -> 59...
2020-05-12 09:54:25.380 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.381 31846 INFO migrate.versioning.api [-] 59 -> 60...
2020-05-12 09:54:25.392 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.393 31846 INFO migrate.versioning.api [-] 60 -> 61...
2020-05-12 09:54:25.404 31846 INFO migrate.versioning.api [-] done
[root@controller1 ~]# chown -R keystone:keystone /var/log/keystone/keystone.log    # optional
[root@controller1 keystone]# mysql -ukeystone -p04aea9de5f79 keystone -e "use keystone;show tables;"
+-----------------------------+
| Tables_in_keystone |
+-----------------------------+
| access_token |
| application_credential |
| application_credential_role |
| assignment |
| config_register |
| consumer |
| credential |
| endpoint |
| endpoint_group |
| federated_user |
| federation_protocol |
| group |
| id_mapping |
| identity_provider |
| idp_remote_ids |
| implied_role |
| limit |
| local_user |
| mapping |
| migrate_version |
| nonlocal_user |
| password |
| policy |
| policy_association |
| project |
| project_endpoint |
| project_endpoint_group |
| project_tag |
| region |
| registered_limit |
| request_token |
| revocation_event |
| role |
| sensitive_config |
| service |
| service_provider |
| system_assignment |
| token |
| trust |
| trust_role |
| user |
| user_group_membership |
| user_option |
| whitelisted_config |
+-----------------------------+
The tables have been created, OK.
Note: if the tables were not created, check the log; usually the database connection in the configuration is wrong. The log file is /var/log/keystone/keystone.log.
Initialize the Fernet keys:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Note: the Fernet key initialization is needed on every control node; syncing the keystone database only needs to be run once, on any one control node.
If you can see the fernet-keys and credential-keys directories shown below, the initialization commands above completed successfully.
[root@controller1 ~]# ls -lh /etc/keystone/
total 136K
drwx------. 2 keystone keystone 24 Feb 28 14:16 credential-keys
-rw-r-----. 1 root keystone 2.3K Nov 1 06:24 default_catalog.templates
drwx------. 2 keystone keystone 24 Feb 28 14:16 fernet-keys
-rw-r-----. 1 root keystone 114K Feb 28 14:14 keystone.conf
-rw-r-----. 1 root keystone 2.5K Nov 1 06:24 keystone-paste.ini
-rw-r-----. 1 root keystone 1.1K Nov 1 06:24 logging.conf
-rw-r-----. 1 root keystone 3 Nov 1 17:21 policy.json
-rw-r-----. 1 keystone keystone 665 Nov 1 06:24 sso_callback_template.html
Copy the configuration to the other two control nodes (the keystone, glance, nova, neutron, etc. configurations must be identical on all three control nodes; this will not be spelled out again later).
Tar up controller1's /etc/keystone/ directory and send it to the other control nodes:
cd /etc/keystone
tar czvf keystone-controller1.tar.gz ./*
scp keystone-controller1.tar.gz root@10.1.36.22:/etc/keystone/
scp keystone-controller1.tar.gz root@10.1.36.23:/etc/keystone/
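On controller2 and controller3, unpack the archive in place (a hedged sketch; run it as root so the ownership and permissions stored in the archive are preserved):
cd /etc/keystone
tar xzvf keystone-controller1.tar.gz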
----------------------------------------------------------------------------------
Configure the Apache HTTP server
Edit /etc/httpd/conf/httpd.conf and set the ServerName option to the control node:
Listen 0.0.0.0:80
ServerName localhost:80
ServerName must be configured in httpd, otherwise the keystone service will not start.
Copy the configuration to the other control nodes:
scp /etc/httpd/conf/httpd.conf root@10.1.36.22:/etc/httpd/conf/
scp /etc/httpd/conf/httpd.conf root@10.1.36.23:/etc/httpd/conf/
The following is the content of /etc/httpd/conf.d/wsgi-keystone.conf. Apache fronts keystone on these ports: 5000 for normal API access, 35357 for the (legacy) admin API.
[root@controller1 ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller1 ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf
[root@controller1 ~]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 0.0.0.0:5000
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LimitRequestBody 114688
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone.log
CustomLog /var/log/httpd/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup keystone-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
Copy the configuration to the other control nodes:
scp /etc/httpd/conf.d/wsgi-keystone.conf root@10.1.36.22:/etc/httpd/conf.d/
scp /etc/httpd/conf.d/wsgi-keystone.conf root@10.1.36.23:/etc/httpd/conf.d/
---------------------------------------------------------------------------------------------------
Start the Apache HTTP service and configure it to start at boot:
[root@controller1 ~]# systemctl enable httpd.service && systemctl start httpd.service
---------------------------------------------------------------------------------------------------
Check the ports:
[root@controller1 ~]# netstat -lntup|grep httpd
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 10038/httpd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 10038/httpd
Check the log /var/log/keystone/keystone.log.
If there is no ERROR, keystone started normally.
[root@controller1 ~]# tail -n 20 /var/log/keystone/keystone.log
2020-05-08 17:10:24.056 8156 INFO migrate.versioning.api [-] 43 -> 44...
2020-05-08 17:10:24.069 8156 INFO migrate.versioning.api [-] done
2020-05-08 17:12:33.635 8258 INFO keystone.common.token_utils [-] key_repository does not appear to exist; attempting to create it
2020-05-08 17:12:33.635 8258 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
2020-05-08 17:12:33.636 8258 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
2020-05-08 17:12:33.636 8258 INFO keystone.common.token_utils [-] Starting key rotation with 1 key files: ['/etc/keystone/fernet-keys/0']
2020-05-08 17:12:33.636 8258 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Current primary key is: 0
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Next primary key will be: 1
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Promoted key 0 to be the primary: 1
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
2020-05-08 17:12:41.854 8271 INFO keystone.common.token_utils [-] key_repository does not appear to exist; attempting to create it
2020-05-08 17:12:41.855 8271 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
2020-05-08 17:12:41.855 8271 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
2020-05-08 17:12:41.855 8271 INFO keystone.common.token_utils [-] Starting key rotation with 1 key files: ['/etc/keystone/credential-keys/0']
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Current primary key is: 0
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Next primary key will be: 1
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Promoted key 0 to be the primary: 1
2020-05-08 17:12:41.857 8271 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
Before going any further, make sure keystone's API and admin ports respond normally; you can check this by opening them in a web browser.
---------------------------------------------------------------------------------------------------
Create users and endpoint/version information for verification:
[root@controller1 ~]# grep -n '^admin_token' /etc/keystone/keystone.conf
18:admin_token = dc46816a3e103ec2a700
[root@controller1 ~]# export OS_TOKEN=dc46816a3e103ec2a700    # set the environment variable
[root@controller1 ~]# export OS_IDENTITY_API_VERSION=3
[root@controller1 ~]# env|grep ^OS    # check that the variables were set
OS_IDENTITY_API_VERSION=3
OS_TOKEN=dc46816a3e103ec2a700
[root@controller1 ~]# openstack domain list    # sanity check: no output is expected because nothing has been created yet; if an error appears, check the logs
Create domains, projects, users and roles
The Identity service provides authentication for every OpenStack service, using a combination of domains, projects (tenants), users and roles.
Create the default domain:
openstack domain create --description "Default Domain" default
For administrative operations in your environment, create an administrative project, user and role.
Create the admin project:
openstack project create --domain default --description "Admin Project" admin
Note
OpenStack generates IDs dynamically, so your output will differ from the example output shown here.
Create the admin user:
openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | ae9d80f6b8f94403ac1ddf0ff2cad01e |
| enabled | True |
| id | efe2970c7ab74c67a4aced146cee3fb0 |
| name | admin |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
The password was set to 04aea9de5f79.
Create the admin role:
openstack role create admin
Add the admin role to the admin project and user:
openstack role add --project admin --user admin admin
Note
This command produces no output.
Note
Tip: it is best to delete any admin user that was added during registration whose password you do not know...
Create the demo project:
openstack project create --domain default --description "Demo Project" demo
Note
Do not repeat this step when creating additional users for this project.
Create the demo user:
openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | ae9d80f6b8f94403ac1ddf0ff2cad01e |
| enabled | True |
| id | e40023738a1e40e8b3fc6fd3bee7dae7 |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
The password was set to 04aea9de5f79.
Create the user role:
openstack role create user
Add the user role to the demo project and user:
openstack role add --project demo --user demo user
This guide uses a service project that contains a unique user for each service you add to your environment. Create the service project:
openstack project create --domain default --description "Service Project" service
Quick copy-and-paste command list
export OS_TOKEN=dc46816a3e103ec2a700
export OS_URL=http://10.1.36.28:5000/v3
export OS_IDENTITY_API_VERSION=3
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
openstack project create --domain default --description "Service Project" service
--------------------------------------------------------------------------------------------------
List the users and roles that were created:
[root@controller1 ~]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 2b3676307efa44759e21b0ac0b84dd7d | admin |
| 9813446ed72a4d548425ab5567f7ac42 | demo |
+----------------------------------+-------+
[root@controller1 ~]# openstack role list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 5486767f05c74584b327b3ec8b808966 | user |
| a4e5cf4725574da5b01d6a351026a66b | admin |
+----------------------------------+-------+
[root@controller1 ~]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 1633afbf896341178c61d563a461cd47 | service |
| 445adc5d8a7e49a693530192fb8fb4c2 | admin |
| 7a42622b277a48baaa80a38571f0c5ac | demo |
+----------------------------------+---------+
-------------------------------------------------------------------------------------------------
Create the glance user:
openstack user create --domain default --password=04aea9de5f79 glance
Add this user to the service project and grant it the admin role:
openstack role add --project service --user glance admin
Create the nova user:
openstack user create --domain default --password=04aea9de5f79 nova
openstack role add --project service --user nova admin
Create the placement user (for nova placement):
openstack user create --domain default --password=04aea9de5f79 placement
openstack role add --project service --user placement admin
Create the neutron user:
openstack user create --domain default --password=04aea9de5f79 neutron
openstack role add --project service --user neutron admin
Bootstrap the Identity service:
keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-region-id RegionOne
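For reference, the bootstrap command in the official installation guide also registers the Identity endpoints; a hedged sketch using this deployment's VIP and the endpoint URLs that appear later in this guide (an alternative to creating the endpoints by hand):
keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://10.1.36.28:5000/v3/ \
  --bootstrap-internal-url http://10.1.36.28:5000/v3/ \
  --bootstrap-public-url http://10.1.36.28:5000/v3/ \
  --bootstrap-region-id RegionOne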
This step may not be needed.
If this step goes wrong (for example you typed the wrong domain name or port), the domains, projects, users and roles below cannot be created. Re-running the configuration does not fix it, because it will not overwrite the earlier entries; the fix is as follows:
MariaDB [keystone]> select * from endpoint;
+----------------------------------+--------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
| id | legacy_endpoint_id | interface | service_id | url | extra | enabled | region_id |
+----------------------------------+--------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
| 94f2003fb6f34c50828177fb5bfa0724 | NULL | public | d11569bcab004ad3b0b2de12b5e363c9 | http://10.1.36.28:9292 | {} | 1 | RegionOne |
| b7dc83fbd2f24f48a26e6fd392bcda27 | NULL | internal | a698441d64a94ed888fc97087428af74 | http://10.1.36.28:5000/v3 | {} | 1 | RegionOne |
| b86828a2a2c44f53abd1d67176b3cadc | NULL | public | a698441d64a94ed888fc97087428af74 | http://10.1.36.28:5000/v3 | {} | 1 | RegionOne |
| c21ec48a677d44fab2422ba77d53ca94 | NULL | internal | d11569bcab004ad3b0b2de12b5e363c9 | http://10.1.36.28:9292 | {} | 1 | RegionOne |
| ecc28f07128c4723bc5f5363fbc385f3 | NULL | admin | a698441d64a94ed888fc97087428af74 | http://10.1.36.28:35357/v3 | {} | 1 | RegionOne |
| f328b0b8a9b942ce9dffd06b6eaa740a | NULL | admin | d11569bcab004ad3b0b2de12b5e363c9 | http://10.1.36.28:9292 | {} | 1 | RegionOne |
+----------------------------------+--------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
6 rows in set (0.00 sec)
MariaDB [keystone]> delete from endpoint where url like '%36.28%';
Query OK, 6 rows affected (0.01 sec)
MariaDB [keystone]> select * from endpoint;
Empty set (0.00 sec)
After cleaning up, redo the steps above.
Create the service entity and API endpoints
In your OpenStack environment, the Identity service manages a catalog of services. Services use this catalog to determine which services are available in your environment.
Create the service entity for the Identity service:
[root@controller1 ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | ab1131690e2a4787b3a4282c07327250 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Note
OpenStack generates IDs dynamically, so your output will differ from the example output shown here.
The Identity service manages a catalog of API endpoints associated with the services in your environment. Services use this catalog to determine how to communicate with the other services in your environment.
OpenStack uses three API endpoint variants for each service: admin, internal, and public. By default, the admin endpoint allows modifying users and tenants, while the public and internal APIs do not. In a production environment the variants may live on separate networks, serving different kinds of users for security reasons. For instance, the public API network is visible from the internet so customers can manage their own clouds; the admin API network is restricted to operators who manage the cloud infrastructure; the internal API network may be restricted to the hosts that run OpenStack services. OpenStack also supports multiple regions for scalability. For simplicity, this guide uses the management network for all endpoint variants and the default RegionOne region.
Create the Identity service API endpoints:
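The endpoint-creation commands themselves are not shown in these notes; a hedged sketch that matches the endpoint table listed later in this guide (all three variants on the VIP, port 5000):
openstack endpoint create --region RegionOne identity public http://10.1.36.28:5000/v3
openstack endpoint create --region RegionOne identity internal http://10.1.36.28:5000/v3
openstack endpoint create --region RegionOne identity admin http://10.1.36.28:5000/v3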
Verify the operation
Verify the Identity service before installing other services.
Note
Run these commands on the control node.
[root@controller1 ~]# unset OS_TOKEN OS_URL
[root@controller1 ~]# openstack --os-auth-url http://10.1.36.28:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:
+------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------+
| expires | 2020-05-08T12:25:45+0000 |
| id | gAAAAABetUG5qEjRd4eIeiIfkTxVWrtMFQ_M7bvZ-GFGsguCOjeOs9GFgJJtPhWcgLOmDrYpnzO44nY5E- |
| | _H3KleSFOg9vnEqVb_ljbFe1dJ5mYXCcoLKaFZL- |
| | JlM6g7_gdKtNsqGANNzm3jf_rB42Yt2FG9MMbr9iL7dPgjI18MldQP2vrD4gU |
| project_id | 445adc5d8a7e49a693530192fb8fb4c2 |
| user_id | 2b3676307efa44759e21b0ac0b84dd7d |
+------------+--------------------------------------------------------------------------------------------------+
If you get this far, keystone is working.
[root@controller1 ~]# openstack --os-auth-url http://10.1.36.28:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
Password:
+------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------+
| expires | 2020-05-08T12:26:45+0000 |
| id | gAAAAABetUH1qGNgacWyT76IQRNCcmRGFxJ- |
| | Fji2Vl23eBtqpppIwFxRqAqXWJH23V4jD7IkhBpTVu5bIPUhEgq6Tof2HmBN3dAlDbohKI1vEyKRJw9QUDZB9_- |
| | 31sO_k96GcIOVrUD_OcEGhjcSsWUnylGMVIQsYCBwiIn1dyl1H_A0oxSwTsI |
| project_id | 7a42622b277a48baaa80a38571f0c5ac |
| user_id | 9813446ed72a4d548425ab5567f7ac42 |
+------------+--------------------------------------------------------------------------------------------------+
Create the OpenStack client environment scripts
Create client environment variable scripts for the admin and demo projects and users.
[root@controller1 ~]# vim admin-openstack.sh
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=04aea9de5f79
export OS_AUTH_URL=http://10.1.36.28:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@controller1 ~]# source admin-openstack.sh
Note 1: to change the current user's (here admin's) password to 04aea9de5f79, use: openstack user password set --password 04aea9de5f79
Note 2: to change a different user's password, taking the admin user as an example: openstack user set --password 04aea9de5f79 admin
[root@controller1 ~]# openstack token issue
+------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------+
| expires | 2020-05-08T12:25:04+0000 |
| id | gAAAAABetUGQgF-ERb2G_km7emcwzZszP3Cd8RYCN38RMkY4lyom0P2AqK6o4MzUoxwRHvn_lHq0wHu_42RicpXRRiZ4lDG1 |
| | fFB0ecLZW9Q6dAP9OUmQvZkoDv3IybNcjAStw6vzu128syVEW_BjgVrK_LuCl5ZVgk5Z8wEY_SwfozHsnSA6JWA |
| project_id | 445adc5d8a7e49a693530192fb8fb4c2 |
| user_id | 2b3676307efa44759e21b0ac0b84dd7d |
+------------+--------------------------------------------------------------------------------------------------+
[root@controller1 ~]# vim demo-openstack.sh
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=04aea9de5f79
export OS_AUTH_URL=http://10.1.36.28:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@controller1 ~]# source demo-openstack.sh
[root@controller1 ~]# openstack token issue --fit-width
+------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------+
| expires | 2020-05-08T12:24:25+0000 |
| id | gAAAAABetUFpzlG4u8ur9Exxr6XOtSx3ms9KUcoMkIR-GC8pp27gN530Ytj5bdUP99ETep_NstODWHs1YvVihGH3HDnDmq- |
| | iE45sdGdfU-Ic603f4w-JQjd8mtSeJLDIFVUDe4nbW1lA_OukWKhYl9DerU72sV0h_5sqmMW-Qi1-VUQIsd4ftOQ |
| project_id | 7a42622b277a48baaa80a38571f0c5ac |
| user_id | 9813446ed72a4d548425ab5567f7ac42 |
+------------+--------------------------------------------------------------------------------------------------+
Chapter 4 OpenStack Image Service: Glance
Glance consists of three main parts: glance-api, glance-registry, and the image store.
glance-api: accepts requests to create, delete, and read images.
glance-registry: the image registry service.
1. Prerequisites
Create the glance service:
source admin-openstack.sh
openstack service create --name glance --description "OpenStack Image service" image
Create the Image service API endpoints:
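The endpoint-creation commands are not recorded in these notes; a hedged sketch consistent with the endpoint list shown below (glance-api on port 9292 of the VIP):
openstack endpoint create --region RegionOne image public http://10.1.36.28:9292
openstack endpoint create --region RegionOne image internal http://10.1.36.28:9292
openstack endpoint create --region RegionOne image admin http://10.1.36.28:9292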
[root@controller1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| 0eb8a8875c17452eb1a32053fafa95c8 | RegionOne | keystone | identity | True | public | http://10.1.36.28:5000/v3 |
| 2d172275ea58402bbfd7bf58b2c00260 | RegionOne | glance | image | True | public | http://10.1.36.28:9292 |
| 45f475af73b84dd092da35e3a4844234 | RegionOne | glance | image | True | internal | http://10.1.36.28:9292 |
| 8584b619de5d42259ad50e48b50ae6ae | RegionOne | keystone | identity | True | internal | http://10.1.36.28:5000/v3 |
| c7b19ea074d8483da8ee74a784ac579c | RegionOne | glance | image | True | admin | http://10.1.36.28:9292 |
| dc5d3a56208b4453a1d6650bf2c20f68 | RegionOne | keystone | identity | True | admin | http://10.1.36.28:5000/v3 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
2. Install and configure the components
Install glance:
# yum install -y openstack-glance python-glance python-glanceclient
Edit /etc/glance/glance-api.conf and complete the following:
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://glance:04aea9de5f79@10.1.36.28:3306/glance
Configure the keystone integration in glance-api.conf:
In the [keystone_authtoken] and [paste_deploy] sections of /etc/glance/glance-api.conf, configure Identity service access:
[keystone_authtoken]
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 04aea9de5f79
[paste_deploy]
flavor = keystone
Note: since the N (Newton) release the keystone auth API version has been bumped, so make sure the configuration is raised to the newer version accordingly; otherwise openstack image list fails with HTTP 500. The keystone auth version must be updated in the later services as well, but this will not be repeated.
An example of the error:
[root@controller1 ~]# openstack image list
Internal Server Error (HTTP 500)
# Enable copy-on-write:
[DEFAULT]
show_image_direct_url = True
In the [glance_store] section, change the default store from local files to Ceph RBD:
[glance_store]
...
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
Note: to keep the default local file store instead, configure it as follows:
[glance_store]
enabled_backends = file,http
default_backend = file
filesystem_store_datadir = /var/lib/glance/images/
Check that /etc/glance/glance-api.conf matches the following:
# grep -v '^#\|^$' /etc/glance/glance-api.conf
[DEFAULT]
debug = True
log_file = /var/log/glance/glance-api.log
use_forwarded_for = true
bind_port = 9292
workers = 5
show_multiple_locations = True
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
[cinder]
[cors]
[database]
connection = mysql+pymysql://glance:04aea9de5f79@10.1.36.28:3306/glance
[file]
[glance.store.http.store]
[glance.store.rbd.store]
[glance.store.sheepdog.store]
[glance.store.swift.store]
[glance.store.vmware_datastore.store]
[glance_store]
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
[image_format]
[keystone_authtoken]
www_authenticate_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 04aea9de5f79
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
Note: make sure /etc/ceph/ceph.conf and /etc/ceph/ceph.client.glance.keyring exist and are readable by the glance user.
[root@controller1 ~]# ls -lh /etc/ceph/
total 16K
-rw-r--r-- 1 glance glance 64 May 12 09:05 ceph.client.cinder.keyring
-rw-r----- 1 glance glance 64 May 12 09:03 ceph.client.glance.keyring
-rw-r--r-- 1 glance glance 1.5K May 12 13:45 ceph.conf
Also make sure ceph.conf states where the client.glance keyring is stored:
# cat /etc/ceph/ceph.conf
[global]
fsid = 3948cba4-b0fa-4e61-84f5-3cec08dd5859
mon_initial_members = ceph-host-01, ceph-host-02, ceph-host-03
mon_host = 10.1.36.11,10.1.36.12,10.1.36.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 2
mon clock drift warn backoff = 30
public_network = 10.1.36.0/24
cluster_network = 192.168.36.0/24
max_open_files = 131072
mon_pg_warn_max_per_osd = 1000
osd pool default pg num = 256
osd pool default pgp num = 256
osd pool default size = 2
osd pool default min size = 1
mon_osd_full_ratio = .90
mon_osd_nearfull_ratio = .80
osd_deep_scrub_randomize_ratio = 0.01
[mon]
mon_allow_pool_delete = true
mon_osd_down_out_interval = 600
mon_osd_min_down_reporters = 3
[mgr]
mgr modules = dashboard
[osd]
osd_journal_size = 20480
osd_max_write_size = 1024
osd mkfs type = xfs
osd_recovery_op_priority = 1
osd_recovery_max_active = 1
osd_recovery_max_single_start = 1
osd_recovery_threads = 1
osd_recovery_max_chunk = 1048576
osd_max_backfills = 1
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 7
osd_recovery_sleep = 0
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
rbd_concurrent_management_ops = 10
rbd_cache_size = 67108864
rbd_cache_max_dirty = 50331648
rbd_cache_target_dirty = 33554432
rbd_cache_max_dirty_age = 2
rbd_default_format = 2
[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
Sync the database:
# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1336: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
expire_on_commit=expire_on_commit, _conf=conf)
INFO [alembic.runtime.migration] Context impl MySQLImpl.
FO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial
INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> stein_expand01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: stein_expand01, current revision(s): stein_expand01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> stein_contract01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: stein_contract01, current revision(s): stein_contract01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
Check that the database was synced:
[root@controller1 ~]# mysql -uglance -p04aea9de5f79 -e "use glance;show tables;"
+----------------------------------+
| Tables_in_glance |
+----------------------------------+
| alembic_version |
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| metadef_namespace_resource_types |
| metadef_namespaces |
| metadef_objects |
| metadef_properties |
| metadef_resource_types |
| metadef_tags |
| migrate_version |
| task_info |
| tasks |
+----------------------------------+
-------------------------------------------------------------------------------------------
Start the glance service and enable it at boot:
systemctl enable openstack-glance-api
systemctl start openstack-glance-api
-------------------------------------------------------------------------------------------
Listening port: api: 9292
[root@controller1 ~]# netstat -tnlp|grep python
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 15712/python2
-------------------------------------------------------------------------------------------
[root@controller1 ~]# glance image-list
+----+------+
| ID | Name |
+----+------+
+----+------+
If glance image-list produces the output above, glance was installed successfully.
Note: if you see the error below, the www_authenticate_uri or auth_url settings in /etc/glance/glance-api.conf or /etc/glance/glance-registry.conf are usually wrong.
[root@controller1 ~]# openstack image list
Internal Server Error (HTTP 500)
Aside:
glance image-list and openstack image list produce the same result.
---------------------------------------------------------------------------------------------------
Verify glance
Download a source image (cirros-0.3.5-x86_64-disk.img) and upload it to glance:
openstack image create "cirros3.5" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+----------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+----------------------------------------------------------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2020-05-09T08:23:30Z |
| disk_format | qcow2 |
| file | /v2/images/67a2878e-1faf-415b-afc2-c48741dc9a24/file |
| id | 67a2878e-1faf-415b-afc2-c48741dc9a24 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros3.5 |
| owner | 445adc5d8a7e49a693530192fb8fb4c2 |
| properties | direct_url='rbd://b071b40f-44e4-4a25-bdb3-8b654e4a429a/images/67a2878e-1faf-415b-afc2-c48741dc9a24/snap' |
| protected | False |
| schema | /v2/schemas/image |
| size | 13267968 |
| status | active |
| tags | |
| updated_at | 2020-05-09T08:23:33Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------------------------------+
------------------------------------------------------------------------------------------------
List the images:
[root@controller1 ~]# openstack image list
+--------------------------------------+-----------+--------+
| ID | Name | Status |
+--------------------------------------+-----------+--------+
| 67a2878e-1faf-415b-afc2-c48741dc9a24 | cirros3.5 | active |
+--------------------------------------+-----------+--------+
[root@controller1 ~]# glance image-list
+--------------------------------------+-----------+
| ID | Name |
+--------------------------------------+-----------+
| 67a2878e-1faf-415b-afc2-c48741dc9a24 | cirros3.5 |
+--------------------------------------+-----------+
Where the image is stored (the Ceph images pool):
[root@controller1 ~]# rbd ls images
67a2878e-1faf-415b-afc2-c48741dc9a24
Note: for glance high availability, simply tar up the /etc/glance directory on the controller1 control node, copy it to the other control nodes, unpack it, and start the openstack-glance-api (and, if used, openstack-glance-registry) services there; with the haproxy nodes configured, the glance service is highly available. The other services are handled the same way; just remember to adjust any settings that reference a specific host IP. A hedged sketch of the steps follows.
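A hedged sketch of the copy procedure, mirroring the keystone steps earlier in this guide:
cd /etc/glance
tar czvf glance-controller1.tar.gz ./*
scp glance-controller1.tar.gz root@10.1.36.22:/etc/glance/
scp glance-controller1.tar.gz root@10.1.36.23:/etc/glance/
# then on 10.1.36.22 and 10.1.36.23:
cd /etc/glance && tar xzvf glance-controller1.tar.gz
systemctl enable openstack-glance-api && systemctl start openstack-glance-api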
------------------------------------------------------------------------------------------------
Chapter 5 OpenStack Compute Service: Nova
Nova control node (the components an OpenStack virtual machine cannot do without: keystone, glance, nova, neutron)
API: receives and responds to external requests; supports the OpenStack API and the EC2 API.
Cert: handles certificate management.
Scheduler: schedules instances onto hosts.
Conductor: the middleware through which compute nodes access the database.
Consoleauth: handles authorization for console access.
Novncproxy: the VNC proxy.
The Nova API component implements the RESTful API and is the only way to access Nova from outside.
It receives external requests and forwards them to the other service components through the message queue. It is also compatible with the EC2 API, so EC2 management tools can be used for day-to-day nova administration.
The Nova Scheduler module decides which host (compute node) a virtual machine is created on.
Scheduling an instance onto a physical node takes two steps:
filtering (Filter) and weighting (Weight).
The Filter Scheduler first obtains the unfiltered host list, then applies the filter properties to select the compute hosts that meet the requirements.
After filtering, the remaining hosts are weighted and, according to the weighting policy, one host is chosen for each virtual machine to be created; a hedged configuration sketch follows.
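If you need to adjust which filters and weighers the scheduler uses, they are set in nova.conf; a minimal sketch (option names as used by recent Nova releases; the filter list is only an illustrative default, not taken from this deployment):
[filter_scheduler]
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
weight_classes = nova.scheduler.weights.all_weighers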
1. Prerequisites
[root@controller1 ~]# source admin-openstack.sh
Create the nova service and its endpoints:
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://10.1.36.28:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://10.1.36.28:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://10.1.36.28:8774/v2.1
2. Deploy the Nova control node on controller1
First we deploy on the control node all of the required services except nova-compute.
Install the nova control node packages:
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
Edit /etc/nova/nova.conf and complete the following:
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections, configure the database connections:
[api_database]
...
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova_api
[database]
...
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
...
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
Note
Comment out or remove any other options in [keystone_authtoken].
Note: if you do not set the my_ip option, replace any later occurrence of $my_ip in the configuration with the controller node's management interface IP.
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
...
enabled = true
server_listen=0.0.0.0
server_proxyclient_address=10.1.36.28
In the [glance] section, configure the location of the Image service API:
[glance]
...
api_servers = http://10.1.36.28:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
In the [placement] section, configure access to the Placement API:
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password = 04aea9de5f79
The resulting nova.conf:
# grep -v "^#\|^$" /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
[api]
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
manager=nova.conductor.manager.ConductorManager
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://10.1.36.28:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
server_listen=0.0.0.0
server_proxyclient_address=10.1.36.28
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
Sync the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova
Note
Ignore any deprecation messages in this output.
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that nova cell0 and cell1 are registered correctly:
# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@10.1.36.28/nova_cell0 | False |
| cell1 | 7244de69-18a7-4213-9bcc-f04d3d329e8e | rabbit://openstack:****@10.1.36.28 | mysql+pymysql://nova:****@10.1.36.28/nova | False |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
Check that the nova, nova_api and nova_cell0 databases were written successfully:
# mysql -unova -p'04aea9de5f79' -e "use nova_api;show tables;"
+------------------------------+
| Tables_in_nova_api |
+------------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
.
.
.
| resource_provider_traits |
| resource_providers |
| traits |
| users |
+------------------------------+
# mysql -unova -p'04aea9de5f79' -e "use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
.
.
.
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
# mysql -unova -p'04aea9de5f79' -e "use nova_cell0;show tables;"
+--------------------------------------------+
| Tables_in_nova_cell0 |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
.
.
.
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
Finalize the installation
Start the Compute services and configure them to start at boot:
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Verify that the services are up:
# openstack compute service list
+----+------------------+-------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-------------+----------+---------+-------+----------------------------+
| 3 | nova-consoleauth | controller1 | internal | enabled | up | 2020-05-16T01:39:06.000000 |
| 6 | nova-scheduler | controller1 | internal | enabled | up | 2020-05-16T01:39:12.000000 |
| 18 | nova-conductor | controller1 | internal | enabled | up | 2020-05-16T01:39:13.000000 |
+----+------------------+-------------+----------+---------+-------+----------------------------+
Install and configure Placement
Create the Placement API entry in the service catalog:
openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints:
openstack endpoint create --region RegionOne placement public http://10.1.36.28:8778
openstack endpoint create --region RegionOne placement internal http://10.1.36.28:8778
openstack endpoint create --region RegionOne placement admin http://10.1.36.28:8778
Install the packages:
yum install -y openstack-placement-api
Edit /etc/placement/placement.conf and complete the following:
In the [placement_database] section, configure database access:
[placement_database]
# ...
connection = mysql+pymysql://placement:04aea9de5f79@10.1.36.28/placement
(The official guide uses the PLACEMENT_DBPASS placeholder here; replace it with the password chosen for the placement database, 04aea9de5f79 in this deployment.)
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 04aea9de5f79
Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service.
Note
Comment out or remove any other options in the [keystone_authtoken] section.
Note
The values of username, password, project_domain_name and user_domain_name must match what was configured for the placement user in keystone.
The resulting placement.conf:
# grep -v "^#\|^$" /etc/placement/placement.conf
[DEFAULT]
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 04aea9de5f79
[placement]
[placement_database]
connection = mysql+pymysql://placement:04aea9de5f79@10.1.36.28/placement
Populate the placement database:
# su -s /bin/sh -c "placement-manage db sync" placement
Check that the database was populated successfully:
[root@controller1 ~]# mysql -e 'use placement;show tables;'
+------------------------------+
| Tables_in_placement |
+------------------------------+
| alembic_version |
| allocations |
| consumers |
| inventories |
| placement_aggregates |
| projects |
| resource_classes |
| resource_provider_aggregates |
| resource_provider_traits |
| resource_providers |
| traits |
| users |
+------------------------------+
Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-placement-api.conf:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
An example of the full 00-placement-api.conf:
Listen 0.0.0.0:8778
<VirtualHost *:8778>
WSGIProcessGroup placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
WSGIScriptAlias / /usr/bin/placement-api
<IfVersion >= 2.4>
ErrorLogFormat "%M"
</IfVersion>
ErrorLog /var/log/placement/placement-api.log
#SSLEngine On
#SSLCertificateFile ...
#SSLCertificateKeyFile ...
</VirtualHost>
Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart the httpd and memcached services:
systemctl restart httpd memcached
Verify the installation
Run the status check commands:
[root@controller1 ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
# nova-status upgrade check
+--------------------------------------------------------------------+
| Upgrade Check Results |
+--------------------------------------------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: No host mappings or compute nodes were found. Remember to |
| run command 'nova-manage cell_v2 discover_hosts' when new |
| compute hosts are deployed. |
+--------------------------------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Request Spec Migration |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Console Auths |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
--------------------------------------------------------------------------------------------------------------------------------------------
Chapter 6 OpenStack Networking Service: Neutron
In production, if our OpenStack deployment is a public cloud, the usual Linux bridge plus VLAN model does not provide enough VLANs for a large number of tenants, so we introduce VXLAN to handle the instances' internal network traffic.
Our physical servers generally have four network interfaces: an out-of-band management card; a management NIC (for communication between and management of the physical machines); a NIC for instance external traffic (its switch port is a trunk port, and instances reach the various external networks through VLANs on the physical host); and a NIC for instance internal traffic (its switch port is an access port, configured with an IP address for VXLAN to use).
1. Prerequisites
Register the neutron network service:
[root@controller1 ~]# source admin-openstack.sh
openstack service create --name neutron --description "OpenStack Networking" network
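The corresponding network endpoints are not shown in these notes; a hedged sketch following the same pattern as the other services (9696 is neutron-server's default port, an assumption since it is not listed elsewhere in this guide):
openstack endpoint create --region RegionOne network public http://10.1.36.28:9696
openstack endpoint create --region RegionOne network internal http://10.1.36.28:9696
openstack endpoint create --region RegionOne network admin http://10.1.36.28:9696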
2. Configure the networking options
Deploy Neutron on the control node controller1:
[root@controller1 ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure Neutron on the control node controller1.
Edit /etc/neutron/neutron.conf and complete the following:
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://neutron:04aea9de5f79@10.1.36.28/neutron
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and the layer-3 router service plug-in:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
auth_type = password
Note
Comment out or remove any other options in [keystone_authtoken].
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
...
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 04aea9de5f79
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
# grep -v "^#\|^$" /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
auth_strategy = keystone
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:04aea9de5f79@10.1.36.28/neutron
[keystone_authtoken]
www_authenticate_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
auth_type = password
[matchmaker_redis]
[nova]
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 04aea9de5f79
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
quota_network = 200
quota_subnet = 200
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 100
quota_floatingip = 1000
quota_security_group = 100
quota_security_group_rule = 1000
[ssl]
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build the layer-2 virtual networking infrastructure for instances.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following:
In the [ml2] section, enable the network type drivers:
[ml2]
...
type_drivers = flat,vlan,gre,vxlan,geneve
In the [ml2] section, set the tenant (self-service) network type to VXLAN:
[ml2]
...
tenant_network_types = vxlan
In the [ml2] section, enable the Linux bridge and L2 population mechanisms:
[ml2]
...
mechanism_drivers = linuxbridge,l2population
Warning
After you configure the ML2 plug-in, removing values from the type_drivers option can lead to database inconsistency.
In the [ml2] section, enable the port security extension driver:
[ml2]
...
extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
...
flat_networks = default
In the [securitygroup] section, enable ipset to improve the efficiency of security group rules:
[securitygroup]
...
enable_ipset = true
[root@controller1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = default
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = default:1:4000
[ml2_type_vxlan]
vni_ranges = 1:2000
[securitygroup]
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds the layer-2 virtual networks for instances and handles security group rules.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following:
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = default:eth1
In the [vxlan] section, enable VXLAN overlays, set the local IP of the interface that handles the overlay traffic, and enable layer-2 population:
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.21
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@controller1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = default:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.21
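Before starting the agent, it is also worth confirming that bridged traffic is passed to iptables, which the Linux bridge agent relies on for security group filtering. A minimal sketch; the br_netfilter module name is standard on CentOS 7, while the sysctl file name below is just an example:
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/99-bridge-nf.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p /etc/sysctl.d/99-bridge-nf.conf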
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach the metadata service over the network:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[root@controller1 ~]# grep -v "^#\|^$" /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
Configure the metadata agent
The metadata agent provides configuration information to instances, such as credentials.
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the metadata host and the shared secret:
[DEFAULT]
...
nova_metadata_ip = 10.1.36.28
metadata_proxy_shared_secret = 04aea9de5f79
# grep -v '^#\|^$' /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = 10.1.36.28
metadata_proxy_shared_secret = 04aea9de5f79
[cache]
Configure the L3 agent
# grep -v '^#\|^$' /etc/neutron/l3_agent.ini
[DEFAULT]
ovs_use_veth = False
interface_driver = linuxbridge
debug = True
Finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symlink does not exist, create it with the following command:
[root@controller1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
同步数据库:
[root@controller1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note
Database population occurs later for Networking because the script requires complete server and plug-in configuration files.
Configure the Compute service to use the Networking service
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [neutron] section, configure access parameters, enable the metadata proxy, and set the shared secret:
[neutron]
...
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
重启计算API 服务:
[root@controller1 ~]# systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start when the system boots.
For both networking options:
[root@controller1 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:
[root@controller1 ~]# systemctl enable neutron-l3-agent.service
[root@controller1 ~]# systemctl start neutron-l3-agent.service
Verify that neutron is working correctly on the controller node:
[root@controller1 ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 36134331-0c29-4eaa-b287-93e69836d419 | DHCP agent | controller1 | nova | :-) | UP | neutron-dhcp-agent |
| 67b10d2b-2438-40e1-8402-70219cd5100c | Metadata agent | controller1 | None | :-) | UP | neutron-metadata-agent |
| 6e40171c-6be3-49a7-93d0-ee54ce831025 | Linux bridge agent | controller1 | None | :-) | UP | neutron-linuxbridge-agent |
| 7fbb4072-6358-4cf6-8b6e-9631bb0c9eac | L3 agent | controller1 | nova | :-) | UP | neutron-l3-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
终极检验示范:
[root@controller1 ~]# openstack extension list --network
----------------------------------------------------------------------------------------------------------------
Chapter 7: The OpenStack dashboard service, Horizon
Install the packages:
# yum install openstack-dashboard -y
Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:
Configure the dashboard to use OpenStack services on the controller node:
OPENSTACK_HOST = "10.1.36.28"
允许所有主机访问仪表板:
ALLOWED_HOSTS = ['*', ]
Configure the session storage and the memcached cache backend (note that file-based sessions are used here instead of the cache session engine):
#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '10.1.36.28:11211',
}
}
启用第3版认证API:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
启用对域的支持
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
配置API版本:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"volume": 3,
"image": 2,
"compute": 2,
}
通过仪表盘创建用户时的默认域配置为 default :
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
通过仪表盘创建的用户默认角色配置为 user :
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
If you chose networking option 1, disable support for layer-3 networking services; with networking option 2, as deployed here, leave them enabled:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': True,
'enable_ha_router': True,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
}
可以选择性地配置时区:
TIME_ZONE = "Asia/Shanghai"
最终配置示范:
# grep -v '#\|^$' /etc/openstack-dashboard/local_settings
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*', ]
OPENSTACK_API_VERSIONS = {
"identity": 3,
"volume": 2,
"image": 2,
"compute": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
LOCAL_PATH = '/tmp'
SECRET_KEY='3f508e8a4399dffa3323'
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '10.1.36.21:11211',
},
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "10.1.36.21"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True,
'can_edit_group': True,
'can_edit_project': True,
'can_edit_domain': True,
'can_edit_role': True,
}
LAUNCH_INSTANCE_DEFAULTS = {
'config_drive': False,
'enable_scheduler_hints': True,
'disable_image': False,
'disable_instance_snapshot': False,
'disable_volume': False,
'disable_volume_snapshot': False,
'create_volume': False,
}
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': False,
'can_set_password': True,
'requires_keypair': False,
'enable_quotas': True
}
OPENSTACK_CINDER_FEATURES = {
'enable_backup': True,
}
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': True,
'enable_ha_router': True,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
}
OPENSTACK_HEAT_STACK = {
'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
"architecture": _("Architecture"),
"kernel_id": _("Kernel ID"),
"ramdisk_id": _("Ramdisk ID"),
"image_state": _("Euca2ools state"),
"project_id": _("Project ID"),
"image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "Asia/Shanghai"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'operation': {
'format': '%(asctime)s %(message)s'
},
},
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'logging.NullHandler',
},
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler',
},
'operation': {
'level': 'INFO',
'class': 'logging.StreamHandler',
'formatter': 'operation',
},
},
'loggers': {
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},
'requests': {
'handlers': ['null'],
'propagate': False,
},
'horizon': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'horizon.operation_log': {
'handlers': ['operation'],
'level': 'INFO',
'propagate': False,
},
'openstack_dashboard': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'novaclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'cinderclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'keystoneclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'glanceclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'neutronclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'heatclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'swiftclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'openstack_auth': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'nose.plugins.manager': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'django': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'iso8601': {
'handlers': ['null'],
'propagate': False,
},
'scss': {
'handlers': ['null'],
'propagate': False,
},
},
}
SECURITY_GROUP_RULES = {
'all_tcp': {
'name': _('All TCP'),
'ip_protocol': 'tcp',
'from_port': '1',
'to_port': '65535',
},
'all_udp': {
'name': _('All UDP'),
'ip_protocol': 'udp',
'from_port': '1',
'to_port': '65535',
},
'all_icmp': {
'name': _('All ICMP'),
'ip_protocol': 'icmp',
'from_port': '-1',
'to_port': '-1',
},
'ssh': {
'name': 'SSH',
'ip_protocol': 'tcp',
'from_port': '22',
'to_port': '22',
},
'smtp': {
'name': 'SMTP',
'ip_protocol': 'tcp',
'from_port': '25',
'to_port': '25',
},
'dns': {
'name': 'DNS',
'ip_protocol': 'tcp',
'from_port': '53',
'to_port': '53',
},
'http': {
'name': 'HTTP',
'ip_protocol': 'tcp',
'from_port': '80',
'to_port': '80',
},
'pop3': {
'name': 'POP3',
'ip_protocol': 'tcp',
'from_port': '110',
'to_port': '110',
},
'imap': {
'name': 'IMAP',
'ip_protocol': 'tcp',
'from_port': '143',
'to_port': '143',
},
'ldap': {
'name': 'LDAP',
'ip_protocol': 'tcp',
'from_port': '389',
'to_port': '389',
},
'https': {
'name': 'HTTPS',
'ip_protocol': 'tcp',
'from_port': '443',
'to_port': '443',
},
'smtps': {
'name': 'SMTPS',
'ip_protocol': 'tcp',
'from_port': '465',
'to_port': '465',
},
'imaps': {
'name': 'IMAPS',
'ip_protocol': 'tcp',
'from_port': '993',
'to_port': '993',
},
'pop3s': {
'name': 'POP3S',
'ip_protocol': 'tcp',
'from_port': '995',
'to_port': '995',
},
'ms_sql': {
'name': 'MS SQL',
'ip_protocol': 'tcp',
'from_port': '1433',
'to_port': '1433',
},
'mysql': {
'name': 'MYSQL',
'ip_protocol': 'tcp',
'from_port': '3306',
'to_port': '3306',
},
'rdp': {
'name': 'RDP',
'ip_protocol': 'tcp',
'from_port': '3389',
'to_port': '3389',
},
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
'LAUNCH_INSTANCE_DEFAULTS',
'OPENSTACK_IMAGE_FORMATS',
'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN']
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}
Finalize the installation
By default, httpd is installed with the prefork MPM:
[root@node1 ~]# httpd -V
Server version: Apache/2.4.6 (CentOS)
Server built: Jul 29 2019 17:18:49
Server's Module Magic Number: 20120211:24
Server loaded: APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture: 64-bit
Server MPM: prefork
threaded: no
forked: yes (variable process count)
To switch httpd 2.4 to the event MPM, edit /etc/httpd/conf.modules.d/00-mpm.conf so that it contains:
LoadModule mpm_event_module modules/mod_mpm_event.so
[root@node1 ~]# httpd -V
Server version: Apache/2.4.6 (CentOS)
Server built: Jul 29 2019 17:18:49
Server's Module Magic Number: 20120211:24
Server loaded: APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture: 64-bit
Server MPM: event
threaded: yes (fixed thread count)
forked: yes (variable process count)
重启web服务器以及会话存储服务:
[root@controller1 ~]# systemctl restart httpd.service memcached.service
Verify operation of the dashboard.
Log in with the admin or demo user credentials and the default domain.
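A quick reachability check from the command line; this assumes the dashboard is reached through the 10.1.36.28 VIP and the /dashboard/ WEBROOT configured above (an HTTP 200 or 302 response means the vhost is answering):
curl -I http://10.1.36.28/dashboard/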
Installing and configuring services on the compute nodes
Deploying the Nova compute node: compute1
nova-compute normally runs on the compute nodes; it receives requests over the message queue and manages the lifecycle of the VMs.
nova-compute manages KVM through libvirt and Xen through XenAPI.
Base package installation
The base packages must be installed on all OpenStack nodes, both controller and compute nodes.
Install the commonly used tools in advance:
yum install -y vim net-tools wget lrzsz tree screen lsof tcpdump nmap bridge-utils
1. Install the EPEL repository.
2. Install the OpenStack repository.
For OpenStack stein: CentOS 7.6 currently supports four releases, queens, rocky, stein and train; we choose stein, the second-newest release.
From the stein release onward the release packages are available directly from the CentOS extras repository, so they can be installed with yum:
# yum search openstack | grep release
centos-release-openstack-queens.noarch : OpenStack from the CentOS Cloud SIG
centos-release-openstack-rocky.noarch : OpenStack from the CentOS Cloud SIG repo
centos-release-openstack-stein.noarch : OpenStack from the CentOS Cloud SIG repo
centos-release-openstack-train.noarch : OpenStack from the CentOS Cloud SIG repo
# yum install centos-release-openstack-stein -y
3.安装OpenStack客户端
yum install -y python-openstackclient
4.安装openstack SELinux管理包
yum install -y openstack-selinux
5. Time synchronization
Install the network time service.
The OpenStack nodes must be time-synchronized, otherwise instance creation may fail.
# yum install chrony -y
# vim /etc/chrony.conf                # edit the NTP configuration
# systemctl enable chronyd.service    # enable the NTP service at boot
# systemctl start chronyd.service     # start the NTP service
# chronyc sources                     # verify time synchronization
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? ControllerNode 0 6 0 - +0ns[ +0ns] +/- 0ns
设置时区
timedatectl set-timezone Asia/Shanghai
部署和配置nova-compute
[root@compute1 ~]# yum install -y openstack-nova-compute
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
在[DEFAULT]部分,配置``RabbitMQ``消息队列的连接:
[DEFAULT]
...
Note: since the Newton release, OpenStack no longer supports the rpc_backend option; use transport_url instead.
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
...
auth_strategy = keystone
[keystone_authtoken]
...
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
Note
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, enable and configure remote console access:
[vnc]
...
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.1.36.24
The server component listens on all IP addresses, while the proxy component listens only on the management network IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
Note
If the host running the browser cannot resolve the controller host name, replace controller with the management network IP address of the controller node.
In the [glance] section, configure the location of the Image service API:
[glance]
...
在 [oslo_concurrency] 部分,配置锁路径:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
在该[placement]部分中,配置Placement API:
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password = 04aea9de5f79
[root@compute1 ~]# grep -v '^#\|^$' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.1.36.28:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
virt_type = kvm
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen=0.0.0.0
server_proxyclient_address= 10.1.36.24
novncproxy_base_url = http://10.1.36.28:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
Finalize the installation
Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration and no additional configuration is needed.
If this command returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section of the /etc/nova/nova.conf file as follows:
[libvirt]
...
virt_type = qemu
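A small sketch that applies this rule automatically; it assumes the crudini utility (from the openstack-utils package) is available on the compute node:
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    crudini --set /etc/nova/nova.conf libvirt virt_type qemu   # no hardware acceleration: fall back to QEMU
else
    crudini --set /etc/nova/nova.conf libvirt virt_type kvm    # hardware acceleration available: keep KVM
fi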
Start the Compute service and its dependencies, and configure them to start automatically when the system boots:
[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
Integrating Ceph with Nova
Before installing, configure the yum repository; the relatively new nautilus release is used here.
[root@compute1 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
安装ceph-common
[root@compute1 ~]# yum install ceph-common -y
[root@compute1 ~]# rpm -qa | grep ceph-common
ceph-common-14.2.9-0.el7.x86_64
Note: obtain the ceph.conf and ceph.client.cinder.keyring files from the Ceph cluster and place them in the /etc/ceph/ directory on the nova-compute node.
[root@compute1 ceph]# ls -lh /etc/ceph/
total 8.0K
-rwxrwxrwx 1 nova nova 64 May 22 09:21 ceph.client.cinder.keyring
-rwxrwxrwx 1 nova nova 1.5K May 22 09:44 ceph.conf
Because the exact permission requirements were unclear, the widest possible permissions were simply granted here: chmod -R 777 /etc/ceph && chown -R nova.nova /etc/ceph/
If the /etc/ceph directory or the files under it lack sufficient permissions, nova-compute fails with an error such as:
ERROR nova.compute.manager PermissionDeniedError: [errno 13] error calling conf_read_file
Push the client.cinder key to the compute node compute1:
[root@ceph-host-01 ceph-cluster]# ceph auth get-key client.cinder | ssh compute1 tee client.cinder.key
libvirt secret
The nova-compute node must store the client.cinder user's key in libvirt; when a Ceph-backed cinder volume is attached to an instance, libvirt needs this secret to access the Ceph cluster.
# On the Ceph admin node, push the client.cinder key file to the compute nodes; the generated file is temporary and can be deleted once the secret has been added to libvirt.
# Add the secret to libvirt on each compute node; compute1 is used as the example here.
# First generate a UUID; all compute and cinder nodes can share this UUID (the other nodes do not need to repeat this step).
# The same UUID is used later when configuring nova.conf, so keep it consistent.
[root@compute1 ~]# uuidgen
2b706e33-609e-4542-9cc5-1a01703a292f
# 在libvirt上添加秘钥
[root@compute1 ~]# vim secret.xml
<secret ephemeral='no' private='no'>
<uuid>2b706e33-609e-4542-9cc5-1a01703a292f</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
[root@compute1 ~]# virsh secret-define --file secret.xml
Secret 2b706e33-609e-4542-9cc5-1a01703a292f created
[root@compute1 ~]# virsh secret-set-value --secret 2b706e33-609e-4542-9cc5-1a01703a292f --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Note: the value after --base64 is the key from /etc/ceph/ceph.client.cinder.keyring (the client.cinder key):
[root@compute1 ~]# cat client.cinder.key
AQC37MRe3U6XHhAA4AUWhAlyh8bUqrMny1X8bw==
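To confirm the secret was stored correctly before moving on, libvirt can be queried directly (standard virsh subcommands; the UUID is the one generated above):
[root@compute1 ~]# virsh secret-list
[root@compute1 ~]# virsh secret-get-value 2b706e33-609e-4542-9cc5-1a01703a292f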
Configure ceph.conf
# To boot virtual machines from Ceph RBD, Ceph must be configured as the ephemeral backend for nova.
# Enabling the rbd cache in the compute node configuration is recommended.
# To simplify troubleshooting, configure the admin socket parameter; every VM that uses Ceph RBD then gets its own socket, which helps with performance analysis and debugging.
# These settings only touch the [client] and [client.cinder] sections of the ceph.conf file on all compute nodes; compute1 is used as the example.
[root@compute1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 3948cba4-b0fa-4e61-84f5-3cec08dd5859
mon_initial_members = ceph-host-01, ceph-host-02, ceph-host-03
mon_host = 10.1.36.11,10.1.36.12,10.1.36.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 2
mon clock drift warn backoff = 30
public_network = 10.1.36.0/16
cluster_network = 192.168.36.0/24
max_open_files = 131072
mon_pg_warn_max_per_osd = 1000
osd pool default pg num = 256
osd pool default pgp num = 256
osd pool default size = 2
osd pool default min size = 1
mon_osd_full_ratio = .90
mon_osd_nearfull_ratio = .80
osd_deep_scrub_randomize_ratio = 0.01
[mon]
mon_allow_pool_delete = true
mon_osd_down_out_interval = 600
mon_osd_min_down_reporters = 3
[mgr]
mgr modules = dashboard
[osd]
osd_journal_size = 20480
osd_max_write_size = 1024
osd mkfs type = xfs
osd_recovery_op_priority = 1
osd_recovery_max_active = 1
osd_recovery_max_single_start = 1
osd_recovery_threads = 1
osd_recovery_max_chunk = 1048576
osd_max_backfills = 1
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 7
osd_recovery_sleep = 0
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
rbd_concurrent_management_ops = 10
rbd_cache_size = 67108864
rbd_cache_max_dirty = 50331648
rbd_cache_target_dirty = 33554432
rbd_cache_max_dirty_age = 2
rbd_default_format = 2
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
# Create the socket and log directories specified in ceph.conf and change their ownership
[root@compute1 ~]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@compute1 ~]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
Note: in production the /var/run/ceph/guests directory kept disappearing after a server reboot, leaving the compute node unusable (instances could not be created or deleted), so the cron job below periodically checks for and recreates the /var/run/ceph/guests/ directory. An alternative based on systemd-tmpfiles is sketched after the cron entry.
echo '*/3 * * * * root if [ ! -d /var/run/ceph/guests/ ] ;then mkdir -pv /var/run/ceph/guests/ /var/log/qemu/ && chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/ && systemctl restart libvirtd.service openstack-nova-compute.service ;fi' >>/etc/crontab
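As an alternative to the cron job, a systemd-tmpfiles rule can recreate the directories at every boot; a sketch assuming the stock tmpfiles.d mechanism on CentOS 7 and the same qemu:libvirt ownership as above (the file name ceph-guests.conf is arbitrary):
cat <<EOF > /etc/tmpfiles.d/ceph-guests.conf
d /var/run/ceph/guests 0755 qemu libvirt -
d /var/log/qemu 0755 qemu libvirt -
EOF
systemd-tmpfiles --create ceph-guests.conf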
# Configure the nova backend on all compute nodes to use the vms pool of the Ceph cluster
Edit the /etc/nova/nova.conf file and add the following settings:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# uuid前后一致
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
# disable file injection
inject_password = false
inject_key = false
inject_partition = -2
# discard support for the instance's ephemeral root disk; the 'unmap' value releases space immediately when a scsi-type disk frees blocks
hw_disk_discard = unmap
# 原有配置
virt_type=kvm
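Before restarting nova-compute it is worth confirming that the node can actually reach the vms pool with the client.cinder credentials; a minimal check using the keyring placed in /etc/ceph/ earlier:
[root@compute1 ~]# rbd ls vms --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring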
[root@compute1 ~]# cat /etc/nova/nova.conf
[DEFAULT]
debug = True
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
use_neutron=True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver
allow_resize_to_same_host = true
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
live_migration_retry_count = 30
[api]
auth_strategy = keystone
use_forwarded_for = true
[api_database]
[barbican]
[cache]
[cells]
[cinder]
catalog_info = volumev3:cinderv3:internalURL
os_region_name = RegionOne
[compute]
[conductor]
workers = 5
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.1.36.28:9292
num_retries = 3
debug = True
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers = 10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
disk_cachemodes = "network=writeback"
hw_disk_discard = unmap
virt_type = kvm
[metrics]
[mks]
[neutron]
url = http://10.1.36.28:9696
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
compute = auto
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen=0.0.0.0
server_proxyclient_address= 10.1.36.25
novncproxy_base_url = http://10.1.36.28:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
重启计算服务及其依赖
[root@compute1 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
Note: after restarting the nova services it is best to check that they started correctly; if openstack-nova-compute fails to start, inspect /var/log/nova/nova-compute.log to troubleshoot.
systemctl status libvirtd.service openstack-nova-compute.service
Configure live migration
Edit /etc/libvirt/libvirtd.conf
# Perform this on all compute nodes; compute1 is used as the example.
# The line numbers of the settings to change in libvirtd.conf are shown below.
[root@compute1 ~]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf
# Uncomment (and adjust) the following lines:
22:listen_tls = 0
33:listen_tcp = 1
45:tcp_port = "16509"        # uncomment and set the listening port
55:listen_addr = "0.0.0.0"   # uncomment; authentication is also disabled below
158:auth_tcp = "none"
Edit /etc/sysconfig/libvirtd
# Perform this on all compute nodes; compute1 is used as the example.
# The line number of the setting to change in the libvirtd file is shown below.
[root@node3 ~]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
9:LIBVIRTD_ARGS="--listen"   # uncomment this line
Configure iptables
# During live migration the source compute node connects to TCP port 16509 on the destination node; this can be tested with "virsh -c qemu+tcp://{node_ip or node_name}/system".
# Before and after the migration, the instance being moved uses TCP ports 49152-49161 between the source and destination nodes for temporary communication.
# Because iptables rules for the running instances are already in place, do not casually restart the iptables service; insert the new rules instead.
# Also persist the rules by editing the configuration file rather than relying on an "iptables save" command.
# Perform this on all compute nodes; compute1 is used as the example.
[root@compute1 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 16509 -j ACCEPT
[root@compute1 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49161 -j ACCEPT
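To persist these two rules across reboots without running a blanket save, the same rules can be added by hand to /etc/sysconfig/iptables, for example just above the final REJECT/COMMIT lines (a sketch; adjust to the existing ruleset on each node):
-A INPUT -p tcp -m state --state NEW -m tcp --dport 16509 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49161 -j ACCEPT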
重启服务
# libvirtd与nova-compute服务都需要重启
[root@compute1 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
# 查看服务
[root@compute1 ~]# netstat -tunlp | grep 16509
tcp 0 0 10.1.36.24:16509 0.0.0.0:* LISTEN 13107/libvirtd
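As noted above, the TCP listener can be exercised from another compute node with a read-only virsh connection; a sketch assuming 10.1.36.24 is the management IP of the target node, as in the netstat output above:
[root@compute2 ~]# virsh -c qemu+tcp://10.1.36.24/system list --all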
验证是否成功:
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack compute service list --service nova-compute
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 9 | nova-compute | compute1 | nova | enabled | up | 2019-02-18T07:16:34.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
Discover compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 7244de69-18a7-4213-9bcc-f04d3d329e8e
Found 0 unmapped computes in cell: 7244de69-18a7-4213-9bcc-f04d3d329e8e
Note
When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
或者使用下面的命令做验证
[root@controller1 ~]# openstack host list
+------------+-------------+----------+
| Host Name | Service | Zone |
+------------+-------------+----------+
| controller1 | consoleauth | internal |
| controller1 | scheduler | internal |
| controller1 | conductor | internal |
| compute1 | compute | nova |
+------------+-------------+----------+
[root@controller1 ~]# nova service-list
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 4790ca20-37c3-4fbf-92d1-72a7b584f6f6 | nova-consoleauth | controller1 | internal | enabled | up | 2019-02-18T07:19:10.000000 | - | False |
| 69a69d43-98c3-436e-866b-03d7944d4186 | nova-scheduler | controller1 | internal | enabled | up | 2019-02-18T07:19:10.000000 | - | False |
| 14bb7cc2-0e80-4ef5-9f28-0775a69d7943 | nova-conductor | controller1 | internal | enabled | up | 2019-02-18T07:19:09.000000 | - | False |
| b20775d6-213e-403d-bfc5-2a3c3f6438e1 | nova-compute | compute1 | nova | enabled | up | 2019-02-18T07:19:14.000000 | - | False |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
If these four services are listed, the nova deployment succeeded.
Verify the connection between nova and glance; output like the following indicates success:
[root@controller1 ~]# openstack image list
+--------------------------------------+-----------------+--------+
| ID | Name | Status |
+--------------------------------------+-----------------+--------+
| 9560cd59-868a-43ec-8231-351c09bdfe5a | cirros3.4 | active |
+--------------------------------------+-----------------+--------+
[root@controller1 ~]# openstack image show 9560cd59-868a-43ec-8231-351c09bdfe5a
+------------------+--------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2020-05-13T05:39:17Z |
| disk_format | qcow2 |
| file | /v2/images/9560cd59-868a-43ec-8231-351c09bdfe5a/file |
| id | 9560cd59-868a-43ec-8231-351c09bdfe5a |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros3.4 |
| owner | f004bf0d5c874f2c978e441bddfa2724 |
| properties | locations='[{u'url': u'rbd://3948cba4-b0fa-4e61-84f5-3cec08dd5859/images/9560cd59-868a- |
| | 43ec-8231-351c09bdfe5a/snap', u'metadata': {}}]', os_hash_algo='sha512', os_hash_value='1b |
| | 03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f739d1b5eb4c4c |
| | a24d38fdc6f4f9d8247a2bc64db25d6bbdb2', os_hidden='False' |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2020-05-13T05:39:21Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------------------------------------------------------------+
Note: since the Newton release the nova image-list command is no longer supported (use glance image-list or openstack image list instead), so only the commands above can be used.
The verification method recommended upstream after Newton:
# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller1 | internal | enabled | up | 2019-02-18T07:21:30.000000 |
| 2 | nova-scheduler | controller1 | internal | enabled | up | 2019-02-18T07:21:40.000000 |
| 3 | nova-conductor | controller1 | internal | enabled | up | 2019-02-18T07:21:40.000000 |
| 9 | nova-compute | compute1 | nova | enabled | up | 2019-02-18T07:21:34.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
验证nova与keystone的连接,如下说明成功
# openstack catalog list
# nova-status upgrade check
+---------------------------------------------------------------------+
| Upgrade Check Results |
+---------------------------------------------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
Extension: migrating instances between compute nodes
Before migrating, make sure the compute nodes can SSH to each other without a password (passwordless access between compute nodes is the key to successful instance migration). A simple demonstration follows.
Using ceph-host-04 and ceph-host-02 as the example: generate a key pair on one host (ceph-host-04) with ssh-keygen, append the contents of /root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys, then copy these three files (/root/.ssh/id_rsa, /root/.ssh/id_rsa.pub and /root/.ssh/authorized_keys) to all the other hosts (including ceph-host-04 and ceph-host-02). This lets any number of hosts log in to each other without a password.
[root@ceph-host-04 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:MGkIRd0B3Juv6+7OlNOknVGWGKXOulP4b/ddw+e+RDg root@ceph-host-04
The key's randomart image is:
+---[RSA 2048]----+
| .ooo.+..... |
| . .o.o + . |
| . = oo + |
| . ooo o . |
| So= E . |
| .Boo + |
| *++ +o|
| ooo. . o.=|
| =Oo o.. +*|
+----[SHA256]-----+
[root@ceph-host-04 ~]# ssh-copy-id ceph-host-04
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-host-04 (10.30.1.224)' can't be established.
ECDSA key fingerprint is SHA256:qjCvy9Q/qRV2HIT0bt6ev//3rOGVntxAPQRDZ4aXfEE.
ECDSA key fingerprint is MD5:99:db:b6:3d:83:0e:c2:56:25:47:f6:1b:d7:bd:f0:ce.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-host-04's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph-host-04'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-host-04 ~]# ssh-copy-id ceph-host-02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-host-02's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph-host-02'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-host-04 ~]# scp .ssh/id_rsa root@ceph-host-02:/root/.ssh/
id_rsa
[root@ceph-host-04 ~]# ssh ceph-host-02 w
01:23:10 up 5:20, 1 user, load average: 0.12, 0.18, 0.36
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 desktop-l37krfr. 23:27 1:58 0.14s 0.14s -bash
[root@ceph-host-02 ~]# ssh ceph-host-04 w
01:25:01 up 5:22, 1 user, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 desktop-l37krfr. 22:04 5.00s 0.26s 0.26s -bash
Note: in fact it is enough that /root/.ssh/id_rsa and /root/.ssh/authorized_keys have the same contents on all hosts; /root/.ssh/id_rsa.pub holds the same content that goes into /root/.ssh/authorized_keys anyway.
Production usage: for passwordless root login between OpenStack compute nodes (needed when migrating instances between them), follow the simple procedure above; a newly added compute node only needs the already generated /root/.ssh/id_rsa file copied over and the /root/.ssh/id_rsa.pub contents appended to its /root/.ssh/authorized_keys.
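A compact way to fan the same key material out to every compute node; the host list below is hypothetical and password logins are assumed to still work at this point:
for host in compute1 compute2; do                       # hypothetical host list: adjust to your compute nodes
    ssh-copy-id root@$host                              # append id_rsa.pub to the remote authorized_keys
    scp /root/.ssh/id_rsa root@$host:/root/.ssh/id_rsa  # share the same private key so every node trusts every other
done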
Deploying and configuring Neutron on the compute node: compute1
[root@compute1 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset
安装用于监控数据包方面的conntrack-tools软件(可选)
[root@compute1 ~]# yum install -y conntrack-tools
On the neutron compute node (the neutron configuration files can be copied over from the controller):
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, comment out any connection options, because compute nodes do not access the database directly.
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
...
在 “[DEFAULT]” 和 “[keystone_authtoken]” 部分,配置认证服务访问:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
在 [oslo_concurrency] 部分,配置锁路径:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
# grep -v '^#\|^$' /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
[cors]
[database]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
Configure the networking option
Choose the same networking option that you chose on the controller node, then return here and continue with "Configure the Compute service to use the Networking service" below.
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = default:eth1
在该[vxlan]部分中,启动VXLAN覆盖网络:
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.24
在本[securitygroup]节中,启用安全组并配置Linux网桥iptables防火墙驱动程序:
[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@compute1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = default:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.24
为计算节点配置网络服务
编辑/etc/nova/nova.conf文件并完成下面的操作:
在``[neutron]`` 部分,配置访问参数,启用元数据代理并设置密码:
[neutron]
...
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
[root@compute1 ~]# grep -v "^#\|^$" /etc/nova/nova.conf
[DEFAULT]
debug = True
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
use_neutron=True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver
allow_resize_to_same_host = true
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
live_migration_retry_count = 30
[api]
auth_strategy = keystone
use_forwarded_for = true
[api_database]
[barbican]
[cache]
[cells]
[cinder]
catalog_info = volumev3:cinderv3:internalURL
os_region_name = RegionOne
[compute]
[conductor]
workers = 5
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.1.36.28:9292
num_retries = 3
debug = True
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers = 10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
disk_cachemodes = "network=writeback"
hw_disk_discard = unmap
virt_type = kvm
[metrics]
[mks]
[neutron]
url = http://10.1.36.28:9696
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
compute = auto
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen=0.0.0.0
server_proxyclient_address= 10.1.36.25
novncproxy_base_url = http://10.1.36.28:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
完成安装
重启计算服务:
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
启动Linuxbridge代理并配置它开机自启动:
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
Verify that neutron is working correctly on the compute node:
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 36134331-0c29-4eaa-b287-93e69836d419 | DHCP agent | controller1 | nova | :-) | UP | neutron-dhcp-agent |
| 67b10d2b-2438-40e1-8402-70219cd5100c | Metadata agent | controller1 | None | :-) | UP | neutron-metadata-agent |
| 6e40171c-6be3-49a7-93d0-ee54ce831025 | Linux bridge agent | controller1 | None | :-) | UP | neutron-linuxbridge-agent |
| 7fbb4072-6358-4cf6-8b6e-9631bb0c9eac | L3 agent | controller1 | nova | :-) | UP | neutron-l3-agent |
| c5fbf4e0-0d72-40b0-bb53-c383883a0d19 | Linux bridge agent | compute1 | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
This shows that the Linux bridge agent on the compute node has successfully connected to the controller.
The OpenStack Block Storage service: Cinder
Official Cinder documentation: https://docs.openstack.org/cinder
The Block Storage service (cinder) provides block storage to instances. How storage is provisioned and consumed is determined by the block storage driver, or by the drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph and so on.
Install and configure the controller node
The database and grants were already created at the beginning, so they are not repeated here.
To create the service credentials, complete these steps:
Create a cinder user:
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack user create --domain default --password=04aea9de5f79 cinder
添加 admin 角色到 cinder 用户上。
[root@controller1 ~]# openstack role add --project service --user cinder admin
Create the cinderv2 and cinderv3 service entities:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
注解
块设备存储服务要求两个服务实体。
创建块设备存储服务的 API 入口点:
openstack endpoint create --region RegionOne volumev2 public http://10.1.36.28:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://10.1.36.28:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://10.1.36.28:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://10.1.36.28:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://10.1.36.28:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://10.1.36.28:8776/v3/%\(project_id\)s
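The service entities and endpoints just created can be double-checked with the standard openstack CLI:
[root@controller1 ~]# openstack service list | grep volume
[root@controller1 ~]# openstack endpoint list --service volumev3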
The Block Storage service requires an endpoint for each service entity.
Install and configure components
Install the packages:
[root@controller1 ~]# yum install -y openstack-cinder
编辑 /etc/cinder/cinder.conf,同时完成如下动作:
在 [database] 部分,配置数据库访问:
[database]
...
在 “[DEFAULT]” 部分,配置 “RabbitMQ” 消息队列访问:
[DEFAULT]
...
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ (04aea9de5f79 is used throughout this deployment).
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 04aea9de5f79
在 [oslo_concurrency] 部分,配置锁路径:
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
The resulting configuration is shown below; the Block Storage database is initialized right after it:
[root@node1 images]# grep -v "^#\|^$" /etc/cinder/cinder.conf
[DEFAULT]
glance_api_servers = http://10.1.36.28:9292
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
auth_strategy = keystone
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:04aea9de5f79@10.1.36.28/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 04aea9de5f79
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[ceph]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[root@node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
[root@node1 images]# mysql -ucinder -p04aea9de5f79 -e "use cinder;show tables;"
+----------------------------+
| Tables_in_cinder |
+----------------------------+
| attachment_specs |
| backup_metadata |
| backups |
| cgsnapshots |
| clusters |
| consistencygroups |
| driver_initiator_data |
| encryption |
| group_snapshots |
| group_type_projects |
| group_type_specs |
| group_types |
| group_volume_type_mapping |
| groups |
| image_volume_cache_entries |
| messages |
| migrate_version |
| quality_of_service_specs |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| services |
| snapshot_metadata |
| snapshots |
| transfers |
| volume_admin_metadata |
| volume_attachment |
| volume_glance_metadata |
| volume_metadata |
| volume_type_extra_specs |
| volume_type_projects |
| volume_types |
| volumes |
| workers |
+----------------------------+
配置计算节点以使用块设备存储
编辑文件 /etc/nova/nova.conf 并添加如下到其中:
[cinder]
catalog_info = volumev3:cinderv3:internalURL
os_region_name = RegionOne
完成安装
重启计算API 服务:
[root@controller1 ~]# systemctl restart openstack-nova-api.service
启动块设备存储服务,并将其配置为开机自启:
[root@controller1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
验证块设备存储服务的操作。
[root@controller1 ~]#source admin-openstack.sh
[root@controller1 ~]# openstack volume service list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1 | nova | enabled | up | 2020-05-16T08:06:17.000000 | - |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
-----------------------------
Integrating Ceph with Cinder
Preparation
Before installing, configure the yum repository; the relatively new nautilus release is used here.
[root@controller1 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
安装ceph-common
[root@controller1 ~]# yum install ceph-common -y
[root@controller1 ~]# rpm -qa | grep ceph-common
ceph-common-14.2.9-0.el7.x86_64
提前在cinder节点的/etc/ceph/目录下放好ceph.conf和ceph.client.cinder.keyring这2个文件
[root@controller1 ~]# ls -lh /etc/ceph/
total 16K
-rw-r--r-- 1 glance glance 64 May 12 09:05 ceph.client.cinder.keyring
-rw-r----- 1 glance glance 64 May 12 09:03 ceph.client.glance.keyring
-rw-r--r-- 1 glance glance 1.5K May 12 13:45 ceph.conf
-rw-r--r-- 1 glance glance 92 Apr 10 01:28 rbdmap
# Use Ceph as the storage backend
[DEFAULT]
enabled_backends = ceph
# Add a new [ceph] section;
# note that the backend name, rbd_user and rbd_secret_uuid must stay consistent with the rest of the deployment
# Ceph RBD driver
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
volume_backend_name = ceph
# 如果配置多后端,则“glance_api_version”必须配置在[DEFAULT] section
[DEFAULT]
glance_api_version = 2
# 变更配置文件,重启服务
整体配置如下:
[root@controller1 ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
debug = True
use_forwarded_for = true
use_stderr = False
osapi_volume_workers = 5
volume_name_template = volume-%s
glance_api_servers = http://10.1.36.28:9292
glance_num_retries = 3
glance_api_version = 2
os_region_name = RegionOne
enabled_backends = ceph
api_paste_config = /etc/cinder/api-paste.ini
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
auth_strategy = keystone
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:04aea9de5f79@10.1.36.28/cinder
max_retries = -1
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 04aea9de5f79
[nova]
interface = internal
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = 04aea9de5f79
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
driver = noop
[oslo_messaging_rabbit]
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
volume_backend_name = ceph
report_discard_supported = True
image_upload_use_cinder_backend = True
[oslo_middleware]
enable_proxy_headers_parsing = True
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
cinder节点
[root@controller1 ~]# systemctl restart openstack-cinder-volume.service
重启controller的cinder服务
[root@controller1 ~]# systemctl restart openstack-cinder-scheduler openstack-cinder-api
Note: volume_driver = cinder.volume.drivers.rbd.RBDDriver corresponds to /usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py.
Check the service status:
[root@controller1 ~]# cinder service-list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1 | nova | enabled | up | 2020-05-16T08:06:17.000000 | - |
| cinder-volume | controller1@ceph | nova | enabled | up | 2020-05-16T08:06:18.000000 | - |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
controller建立type
[root@controller1 ~]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| f1df2ecf-44ce-4174-8b8e-69e0177efd9e | ceph | - | True |
+--------------------------------------+------+-------------+-----------+
controller节点配置cinder-type和volume_backend_name联动
[root@controller1 ~]# cinder type-key ceph set volume_backend_name=ceph
#查看type的设置情况
[root@controller1 ~]# cinder extra-specs-list
+--------------------------------------+------+---------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+------+---------------------------------+
| f1df2ecf-44ce-4174-8b8e-69e0177efd9e | ceph | {'volume_backend_name': 'ceph'} |
+--------------------------------------+------+---------------------------------+
重启controller的cinder服务
[root@controller1 ~]# systemctl restart openstack-cinder-scheduler openstack-cinder-api
Create a volume to test the Ceph backend (a CLI example follows below); the backing RBD image then shows up in the volumes pool:
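A hedged example of creating such a test volume from the controller; the name test-vol is arbitrary, and the resulting RBD image is named after the volume's UUID, as in the rbd ls output below:
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack volume create --type ceph --size 1 test-vol
[root@controller1 ~]# openstack volume list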
[root@ceph-host-01 ~]# rbd ls volumes
volume-a61b1b60-b55b-493d-ae21-6605ef8cfc35
关于cinder高可用,其实就是三个控制节点都部署了cinder服务而已。
[root@controller1 ~]# cinder service-list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1 | nova | enabled | up | 2020-05-18T08:14:49.000000 | - |
| cinder-scheduler | controller2 | nova | enabled | up | 2020-05-18T08:14:51.000000 | - |
| cinder-scheduler | controller3 | nova | enabled | up | 2020-05-18T08:14:55.000000 | - |
| cinder-volume | controller1@ceph | nova | enabled | up | 2020-05-18T08:14:55.000000 | - |
| cinder-volume | controller2@ceph | nova | enabled | up | 2020-05-18T08:14:51.000000 | - |
| cinder-volume | controller3@ceph | nova | enabled | up | 2020-05-18T08:14:55.000000 | - |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
Booting from a block device
You can create a volume from an image with the Cinder command-line tool:
cinder create --image-id {id of image} --display-name {name of volume} {size of volume}
You can use qemu-img to convert from one format to another. For example:
qemu-img convert -f {source-format} -O {output-format} {source-filename} {output-filename}
qemu-img convert -f qcow2 -O raw precise-cloudimg.img precise-cloudimg.raw
[root@controller1 ~]# qemu-img convert -f qcow2 -O raw new_centos7.4.qcow2 centos7.4.raw
[root@controller1 ~]# qemu-img info centos7.4.raw
image: centos7.4.raw
file format: raw
virtual size: 30G (32212254720 bytes)
disk size: 1.1G
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack image create "CentOS 7.4 64位" --file centos7.4.raw --disk-format raw --container-format bare --public
The image is large, so writing it into the Ceph cluster takes a while.
[root@ceph-host-01 ~]# rbd ls images
73fbe706-fb02-428f-815d-8e97375767a3
9560cd59-868a-43ec-8231-351c09bdfe5a
9e22baf9-71da-49bb-8edf-be0cc09bc8c3
[root@ceph-host-01 ~]# rbd info images/9e22baf9-71da-49bb-8edf-be0cc09bc8c3
rbd image '9e22baf9-71da-49bb-8edf-be0cc09bc8c3':
size 30 GiB in 3840 objects
order 23 (8 MiB objects)
snapshot_count: 1
id: 2880bbd7308b5
block_name_prefix: rbd_data.2880bbd7308b5
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Thu May 21 11:02:24 2020
access_timestamp: Thu May 21 11:02:24 2020
modify_timestamp: Thu May 21 11:47:10 2020
To finish, here is how an instance is created:
1. Boot an instance from a volume (create a bootable volume backed by Ceph storage).
(The detailed interaction flow for attaching a Ceph RBD volume to a VM is not reproduced here.)
When both Glance and Cinder use Ceph block devices, the image is a copy-on-write clone, so new volumes can be created quickly. In the OpenStack dashboard you can boot from such a volume by performing the following steps (an equivalent CLI sketch follows the list):
1. Launch a new instance.
2. Choose the image associated with the copy-on-write clone.
3. Select "Boot from volume".
4. Select the volume you created.
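The same flow from the command line, as a sketch: the image name and the 2c2g flavor appear elsewhere in this document, while the volume name, size and <net-id> are placeholders to adjust for your environment:
# create a bootable volume from the Glance image (a copy-on-write clone inside Ceph)
openstack volume create --image "CentOS 7.4 64位" --size 40 --bootable centos-boot-vol
# boot an instance from that volume
openstack server create --flavor 2c2g --volume centos-boot-vol --network <net-id> centos-vm1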
Inspect the instance's libvirt XML definition:
[root@compute2 ~]# virsh list --uuid
e76962a0-56cf-4b47-b3e7-9cb589d29e6d
[root@compute2 ~]# virsh dumpxml e76962a0-56cf-4b47-b3e7-9cb589d29e6d
<domain type='kvm' id='12'>
<name>instance-00000083</name>
<uuid>e76962a0-56cf-4b47-b3e7-9cb589d29e6d</uuid>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="19.1.0-1.el7"/>
<nova:name>centos-vm1</nova:name>
<nova:creationTime>2020-05-21 04:49:54</nova:creationTime>
<nova:flavor name="2c2g">
<nova:memory>2048</nova:memory>
<nova:disk>40</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>2</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="efe2970c7ab74c67a4aced146cee3fb0">admin</nova:user>
<nova:project uuid="f004bf0d5c874f2c978e441bddfa2724">admin</nova:project>
</nova:owner>
</nova:instance>
</metadata>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<cputune>
<shares>2048</shares>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>RDO</entry>
<entry name='product'>OpenStack Compute</entry>
<entry name='version'>19.1.0-1.el7</entry>
<entry name='serial'>e76962a0-56cf-4b47-b3e7-9cb589d29e6d</entry>
<entry name='uuid'>e76962a0-56cf-4b47-b3e7-9cb589d29e6d</entry>
<entry name='family'>Virtual Machine</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Nehalem-IBRS</model>
<vendor>Intel</vendor>
<topology sockets='2' cores='1' threads='1'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='x2apic'/>
<feature policy='require' name='tsc-deadline'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='rdtscp'/>
</cpu>
<clock offset='utc'>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username='cinder'>
<secret type='ceph' uuid='2b706e33-609e-4542-9cc5-1a01703a292f'/>
</auth>
<source protocol='rbd' name='volumes/volume-d4c71c06-b118-4a71-9076-074efc211f16'>
<host name='10.1.36.11' port='6789'/>
<host name='10.1.36.12' port='6789'/>
<host name='10.1.36.13' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<serial>d4c71c06-b118-4a71-9076-074efc211f16</serial>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<interface type='bridge'>
<mac address='fa:16:3e:e8:9f:03'/>
<source bridge='brq23348359-07'/>
<target dev='tapc816a9fb-5a'/>
<model type='virtio'/>
<mtu size='1500'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='fa:16:3e:b5:0a:c4'/>
<source bridge='brq4a974777-fd'/>
<target dev='tap9459b9d8-e6'/>
<model type='virtio'/>
<mtu size='1450'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/1'/>
<log file='/var/lib/nova/instances/e76962a0-56cf-4b47-b3e7-9cb589d29e6d/console.log' append='off'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/1'>
<source path='/dev/pts/1'/>
<log file='/var/lib/nova/instances/e76962a0-56cf-4b47-b3e7-9cb589d29e6d/console.log' append='off'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='10'/>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
[root@ceph-host-01 ~]# rbd ls volumes
volume-d4c71c06-b118-4a71-9076-074efc211f16
[root@ceph-host-01 ~]# rbd info volumes/volume-d4c71c06-b118-4a71-9076-074efc211f16
rbd image 'volume-d4c71c06-b118-4a71-9076-074efc211f16':
size 30 GiB in 7680 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 29c8d319f2e27
block_name_prefix: rbd_data.29c8d319f2e27
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Thu May 21 12:34:50 2020
access_timestamp: Thu May 21 13:01:01 2020
modify_timestamp: Thu May 21 13:02:42 2020
parent: images/9e22baf9-71da-49bb-8edf-be0cc09bc8c3@snap
overlap: 30 GiB
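The parent field confirms the volume is a copy-on-write clone of the Glance image snapshot. The same relationship can be checked from the image side by listing the snapshot's children:
[root@ceph-host-01 ~]# rbd children images/9e22baf9-71da-49bb-8edf-be0cc09bc8c3@snap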
2. Boot a VM directly from Ceph RBD
# --nic: net-id is the network ID, not the subnet ID;
# the trailing "centos-vm1" is the instance name
[root@controller1 ~]# nova boot --flavor 2c2g --image 'CentOS 7.4 64位' --availability-zone nova \
--nic net-id=23348359-077f-4133-b484-d9d6195f806a,v4-fixed-ip=192.168.99.122 \
--nic net-id=4a974777-fd29-4678-9e70-9545b4208943,v4-fixed-ip=192.168.100.122 \
--security-group default centos-vm1
+--------------------------------------+--------------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | centos-vm1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-jigp2tpl |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | WuJoYkD46mLY |
| config_drive | |
| created | 2020-05-22T06:27:57Z |
| description | - |
| flavor:disk | 40 |
| flavor:ephemeral | 0 |
| flavor:extra_specs | {} |
| flavor:original_name | 2c2g |
| flavor:ram | 2048 |
| flavor:swap | 0 |
| flavor:vcpus | 2 |
| hostId | |
| host_status | |
| id | 92b28257-b5b6-41a4-aebc-9726358d7015 |
| image | CentOS 7.4 64位 (9e22baf9-71da-49bb-8edf-be0cc09bc8c3) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | centos-vm1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| server_groups | [] |
| status | BUILD |
| tags | [] |
| tenant_id | f004bf0d5c874f2c978e441bddfa2724 |
| trusted_image_certificates | - |
| updated | 2020-05-22T06:27:57Z |
| user_id | efe2970c7ab74c67a4aced146cee3fb0 |
+--------------------------------------+--------------------------------------------------------+
# List the newly created instance
[root@controller1 ~]# openstack server list
+--------------------------------------+------------+--------+-------------------------------------------------+-----------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------------+--------+-------------------------------------------------+-----------------+--------+
| 92b28257-b5b6-41a4-aebc-9726358d7015 | centos-vm1 | ACTIVE | vlan99=192.168.99.122; vxlan100=192.168.100.122 | CentOS 7.4 64位 | 2c2g |
+--------------------------------------+------------+--------+-------------------------------------------------+-----------------+--------+
# Show the details of the new instance
[root@controller1 ~]# openstack server show 92b28257-b5b6-41a4-aebc-9726358d7015
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | compute1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute1 |
| OS-EXT-SRV-ATTR:instance_name | instance-00000095 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2020-05-22T06:28:51.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | vlan99=192.168.99.122; vxlan100=192.168.100.122 |
| config_drive | |
| created | 2020-05-22T06:27:57Z |
| flavor | 2c2g (82cc2a11-7b19-4a10-a86e-2408253b70e2) |
| hostId | 49c5f207c741862ee74ae91c1256ad6fe9de334c25195b0897b06150 |
| id | 92b28257-b5b6-41a4-aebc-9726358d7015 |
| image | CentOS 7.4 64位 (9e22baf9-71da-49bb-8edf-be0cc09bc8c3) |
| key_name | None |
| name | centos-vm1 |
| progress | 0 |
| project_id | f004bf0d5c874f2c978e441bddfa2724 |
| properties | |
| security_groups | name='default' |
| | name='default' |
| status | ACTIVE |
| updated | 2020-05-22T06:28:51Z |
| user_id | efe2970c7ab74c67a4aced146cee3fb0 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+
# Verify that the instance boots from Ceph RBD
[root@ceph-host-01 ~]# rbd ls vms
92b28257-b5b6-41a4-aebc-9726358d7015_disk
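As with the volume earlier, the Nova disk in the vms pool should itself be a copy-on-write clone of the image snapshot; this can be confirmed with (the parent field is what to look for):
[root@ceph-host-01 ~]# rbd info vms/92b28257-b5b6-41a4-aebc-9726358d7015_disk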
3. Live-migrate the RBD-backed VM
# "openstack server show 92b28257-b5b6-41a4-aebc-9726358d7015" shows that the RBD-backed instance is on compute1 before the migration;
# this can also be verified with "nova hypervisor-servers compute1".
[root@controller1 ~]# nova hypervisor-servers compute1
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| 92b28257-b5b6-41a4-aebc-9726358d7015 | instance-00000095 | 83801656-d148-40e7-b6fd-409993f5931d | compute1 |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
[root@controller1 ~]# nova hypervisor-servers compute2
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
[root@controller1 ~]# nova live-migration centos-vm1 compute2
# the migration status can be watched while it runs
[root@controller1 ~]# openstack server list
# once the migration completes, check which node the instance ended up on;
# or check "hypervisor_hostname" in the output of "openstack server show 92b28257-b5b6-41a4-aebc-9726358d7015"
[root@controller1 ~]# nova hypervisor-servers compute2
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| 92b28257-b5b6-41a4-aebc-9726358d7015 | instance-00000095 | e433bd1a-13f6-42e9-a176-adb8250ec254 | compute2 |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
[root@controller1 ~]# nova hypervisor-servers compute1
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
Inspect the instance's libvirt XML definition after the migration
[root@compute2 ~]# virsh dumpxml instance-00000095
<domain type='kvm' id='1'>
<name>instance-00000095</name>
<uuid>92b28257-b5b6-41a4-aebc-9726358d7015</uuid>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="19.1.0-1.el7"/>
<nova:name>centos-vm1</nova:name>
<nova:creationTime>2020-05-22 06:28:49</nova:creationTime>
<nova:flavor name="2c2g">
<nova:memory>2048</nova:memory>
<nova:disk>40</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>2</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="efe2970c7ab74c67a4aced146cee3fb0">admin</nova:user>
<nova:project uuid="f004bf0d5c874f2c978e441bddfa2724">admin</nova:project>
</nova:owner>
<nova:root type="image" uuid="9e22baf9-71da-49bb-8edf-be0cc09bc8c3"/>
</nova:instance>
</metadata>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<cputune>
<shares>2048</shares>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>RDO</entry>
<entry name='product'>OpenStack Compute</entry>
<entry name='version'>19.1.0-1.el7</entry>
<entry name='serial'>92b28257-b5b6-41a4-aebc-9726358d7015</entry>
<entry name='uuid'>92b28257-b5b6-41a4-aebc-9726358d7015</entry>
<entry name='family'>Virtual Machine</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Nehalem-IBRS</model>
<vendor>Intel</vendor>
<topology sockets='2' cores='1' threads='1'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='x2apic'/>
<feature policy='require' name='tsc-deadline'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='rdtscp'/>
</cpu>
<clock offset='utc'>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username='cinder'>
<secret type='ceph' uuid='2b706e33-609e-4542-9cc5-1a01703a292f'/>
</auth>
<source protocol='rbd' name='vms/92b28257-b5b6-41a4-aebc-9726358d7015_disk'>
<host name='10.1.36.11' port='6789'/>
<host name='10.1.36.12' port='6789'/>
<host name='10.1.36.13' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<interface type='bridge'>
<mac address='fa:16:3e:b8:80:be'/>
<source bridge='brq23348359-07'/>
<target dev='tap5d1d3450-68'/>
<model type='virtio'/>
<mtu size='1500'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='fa:16:3e:32:b7:3c'/>
<source bridge='brq4a974777-fd'/>
<target dev='tap43d4a7a0-f7'/>
<model type='virtio'/>
<mtu size='1450'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/1'/>
<log file='/var/lib/nova/instances/92b28257-b5b6-41a4-aebc-9726358d7015/console.log' append='off'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/1'>
<source path='/dev/pts/1'/>
<log file='/var/lib/nova/instances/92b28257-b5b6-41a4-aebc-9726358d7015/console.log' append='off'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='10'/>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
Reference: https://www.cnblogs.com/sammyliu/p/4804037.html (Understanding OpenStack + Ceph (1): Ceph + OpenStack cluster deployment and configuration)