OpenStack in practice

Environment preparation

controller 10.0.0.11

compute1   10.0.0.31
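
Both nodes are referred to by host name (controller, compute1) throughout this guide, so name resolution is assumed. A minimal sketch, assuming /etc/hosts is maintained by hand on both nodes:

# Append to /etc/hosts on both nodes (adjust if DNS is used instead)
10.0.0.11 controller
10.0.0.31 compute1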

Common service ports

mariadb: 3306

memcached: 11211

message queue (RabbitMQ): 5672 and 25672

time sync (chrony): 123 and 323

keystone: 5000 and 35357

glance: 9191 and 9292

nova: novncproxy 6080, nova-api 8774, metadata-api 8775

Yum repository configuration

cd /etc/yum.repos.d/
ls
mkdir qiangge
mv *.repo qiangge
ls
echo '[openstack]
name=openstack
baseurl=http://192.168.21.92/repo/
gpgcheck=0
[local]
name=local
baseurl=http://192.168.21.92/local/
gpgcheck=0' >openstack.repo
yum clean all
yum makecache
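
An optional sanity check, as a sketch assuming both repositories above are reachable: confirm they show up before installing anything.

yum repolist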

Time synchronization

On the controller, configure a time server whose upstream source is ntp3.aliyun.com

allow: 10/8

compute1 synchronizes with the controller; its upstream source is controller

Install the chrony service on all nodes

yum install chrony -y

On the controller

Edit /etc/chrony.conf and make the following changes

Change 1 (line 3):  server ntp3.aliyun.com iburst
Change 2 (line 22): allow 10/8

Start chronyd

systemctl restart chronyd
systemctl enable chronyd

On compute1

Edit /etc/chrony.conf and make the following change

Change 1 (line 3): server controller iburst

Start chronyd

systemctl restart chronyd
systemctl enable chronyd
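
To confirm that both nodes are actually syncing, chrony's client can be queried on either node (an optional check, not part of the original steps):

# The controller should list ntp3.aliyun.com as a source; compute1 should list controller
chronyc sources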

Install the OpenStack packages

Production environment (install the release yum repository):

yum -y install centos-release-openstack-mitaka

Note: this walkthrough uses the self-built yum repository configured above instead.

Install the OpenStack client:

yum install python-openstackclient -y

yum install openstack-selinux -y

Install the MariaDB database

On the controller node

Install MariaDB:

yum install mariadb mariadb-server python2-PyMySQL

Edit /etc/my.cnf.d/openstack.cnf

[mysqld]
...
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start MariaDB

systemctl enable mariadb.service
systemctl start mariadb.service

To secure the database service, run the mysql_secure_installation script. In particular, set a suitable password for the database root user.

mysql_secure_installation

Install the message queue

controller node

Install the RabbitMQ message queue

yum install rabbitmq-server

Start the message queue service

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Add the openstack user

rabbitmqctl add_user openstack RABBIT_PASS

Grant the openstack user configure, write, and read permissions

 rabbitmqctl set_permissions openstack ".*" ".*" ".*"
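
An optional check, as a sketch using standard rabbitmqctl subcommands: confirm that the user exists and the permissions took effect.

rabbitmqctl list_users
rabbitmqctl list_permissions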

Install Memcached

controller node

Install memcached

yum install memcached python-memcached

Edit /etc/sysconfig/memcached

OPTIONS="-l 10.0.0.11,::1"
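
If you prefer not to edit the file by hand, here is a minimal sketch with sed, assuming the stock OPTIONS line shipped by the package:

# Replace the existing OPTIONS line so memcached listens on the management IP, then verify
sed -i 's/^OPTIONS=.*/OPTIONS="-l 10.0.0.11,::1"/' /etc/sysconfig/memcached
grep ^OPTIONS /etc/sysconfig/memcached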

Start the Memcached service

systemctl enable memcached.service
systemctl start memcached.service

Identity service

controller node

Create the keystone database:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

Install keystone

yum install openstack-keystone httpd mod_wsgi

Edit the /etc/keystone/keystone.conf configuration file

cp /etc/keystone/keystone.conf{,.bak}
egrep -v "^$|#" /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
md5sum /etc/keystone/keystone.conf

Populate the Identity service database

su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet keys

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

Edit /etc/httpd/conf/httpd.conf and set the ServerName option to the controller node

ServerName controller
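
A non-interactive way to make the same change, as a sketch; it appends the directive only if none is present yet:

grep -q '^ServerName' /etc/httpd/conf/httpd.conf || echo 'ServerName controller' >> /etc/httpd/conf/httpd.conf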

Create the file /etc/httpd/conf.d/wsgi-keystone.conf with the following content

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Start the Apache HTTP service and configure it to start at boot

systemctl enable httpd.service
systemctl start httpd.service

Configure the authentication token

export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Create the service entity and API endpoints

Create the service entity for the Identity service

openstack service create \
--name keystone --description "OpenStack Identity" identity

Create the Identity service API endpoints

openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

Verify that the Identity service API endpoints were created

openstack endpoint list

Tip: to delete an API endpoint, run openstack endpoint delete followed by the endpoint ID, as shown below
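
For example (the ID below is hypothetical; substitute a real ID taken from openstack endpoint list):

openstack endpoint delete 0e1b2c3d4e5f60718293a4b5c6d7e8f9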

Create a domain, projects, users, and roles

Create the default domain

openstack domain create --description "Default Domain" default

Create the admin project

openstack project create --domain default --description "Admin Project" admin

Create the admin user:

openstack user create --domain default   --password ADMIN_PASS admin

Create the admin role:

 openstack role create admin

Add the admin role to the admin project and user:

openstack role add --project admin --user admin admin

Verify that the domain, projects, users, and roles were created

openstack domain list
openstack project list
openstack user list
openstack role list

If the user's password was set incorrectly

Step 1: delete the user: openstack user delete 4efd63361fe14a8b9c5476f3957f6cb9

Step 2: openstack user create --domain default --password ADMIN_PASS admin

Step 3: openstack role add --project admin --user admin admin

Create the service project, plus the demo project, user, and role

openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password DEMO_PASS demo
openstack role create user
openstack role add --project demo --user demo user

Verify operation

Unset the OS_TOKEN and OS_URL environment variables

 unset OS_TOKEN OS_URL

As the admin user, request an authentication token

 openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default  --os-project-name admin --os-username admin token issue

As the demo user, request an authentication token

openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue

Create OpenStack client environment scripts

Edit the admin-openrc file and add the following content

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Edit the demo-openrc file and add the following content

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Source the admin-openrc file to load the Identity service endpoint and the admin project and user credentials into environment variables

. admin-openrc

Request an authentication token

openstack token issue

Image service

controller node

Create the database

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

Source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

Create the glance user

openstack user create --domain default --password GLANCE_PASS glance

Add the admin role to the glance user and service project.

openstack role add --project service --user glance admin

Create the glance service entity

openstack service create --name glance --description "OpenStack Image" image

Create the Image service API endpoint:

openstack endpoint create --region RegionOne image public http://controller:9292

Verify

openstack endpoint list
openstack service list
openstack user list

Install the glance packages

 yum install openstack-glance

Edit the /etc/glance/glance-api.conf configuration file

cp /etc/glance/glance-api.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
#cat glance-api.conf >/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

Edit the /etc/glance/glance-registry.conf configuration file

cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
#cat glance-registry.conf >/etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

Populate the Image service database

su -s /bin/sh -c "glance-manage db_sync" glance

Start the Image services and configure them to start at boot

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

Verify that the services are listening

netstat -tunlp|grep 9[12]
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 26688/python2
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 26689/python2

Source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

Download the source image

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Upload the image to the Image service using the QCOW2 disk format and bare container format, and make it public so that all projects can access it

openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public

Confirm the upload and verify the image attributes

openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 515cace5-b22b-4d41-b3ae-e14b2eebffe9 | cirros | active |
+--------------------------------------+--------+--------+

Compute service

controller

Create the nova_api and nova databases

CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

Source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

Create the nova user

openstack user create --domain default --password NOVA_PASS nova

Add the admin role to the nova user

openstack role add --project service --user nova admin

Create the nova service entity

openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute service API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

Install the nova packages

 yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

Edit the /etc/nova/nova.conf configuration file

cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
#cat nova.conf >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.11
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'

Populate the Compute databases

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

Note

Ignore any deprecation messages in this output.

Start the Compute services and configure them to start at boot

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

compute1 node

Install the nova compute package

yum install openstack-nova-compute

Edit the /etc/nova/nova.conf configuration file

yum install openstack-utils.noarch -y
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.31
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html

Determine whether your compute node supports hardware acceleration for virtual machines

egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, your compute node supports hardware acceleration and no additional configuration is needed.

If this command returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

Edit the [libvirt] section in /etc/nova/nova.conf as follows:

[libvirt]
...
virt_type = qemu
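
The same decision can be scripted; here is a sketch that combines the CPU check above with openstack-config, which is already used throughout this guide:

# Pick qemu when no hardware acceleration is available, otherwise keep kvm
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
else
    openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
fi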

Start the Compute service and its dependencies and configure them to start automatically at boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Verify operation

controller node

Source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

List the service components to verify the successful launch and registration of each process

openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2017-09-12T12:29:32.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2017-09-12T12:29:32.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2017-09-12T12:29:32.000000 |
| 7 | nova-compute | compute1 | nova | enabled | up | 2017-09-12T12:29:34.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

Note

The output should show three service components enabled on the controller node and one service component enabled on the compute node.

Restart commands for the services set up so far

systemctl restart chronyd
systemctl restart mariadb
systemctl restart rabbitmq-server
systemctl restart memcached
systemctl restart httpd
systemctl restart openstack-glance-api openstack-glance-registry
systemctl restart openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

Enable the RabbitMQ management plugin

 rabbitmq-plugins enable rabbitmq_management
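
With the plugin enabled, the management UI listens on port 15672 by default (http://controller:15672). A quick check, as a sketch:

netstat -tunlp | grep 15672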

Networking service

controller node

Create the neutron database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

Source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

Create the neutron user:

openstack user create --domain default --password NEUTRON_PASS neutron

Add the admin role to the neutron user

openstack role add --project service --user neutron admin

Create the neutron service entity:

openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

Configure the provider network option

Install and configure the networking components on the controller node

Install the networking components

 yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Edit the /etc/neutron/neutron.conf file

cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file

cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False

Edit the /etc/neutron/dhcp_agent.ini file

openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

Edit the /etc/neutron/metadata_agent.ini file

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip  controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET

Edit the /etc/nova/nova.conf file

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it with the following command

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database

 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service

systemctl restart openstack-nova-api.service

Start the Networking services and configure them to start at boot

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

compute node

Install the networking components

yum install openstack-neutron-linuxbridge ebtables ipset

Edit the /etc/neutron/neutron.conf file

cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS

Configure the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file (copy it from the controller)

scp controller:/etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini

Edit the /etc/nova/nova.conf file

openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS

Restart the Compute service

systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and configure it to start at boot

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service

Verify operation

Source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

List the loaded extensions and agents to verify that the neutron-server process started successfully

neutron ext-list
neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 64c984ab-1adf-4c24-872c-d86adea2d5a9 | Linux bridge agent | compute1 | | :-) | True | neutron-linuxbridge-agent |
| b8b44853-14bd-4cb8-b4ef-c8102769a855 | Metadata agent | controller | | :-) | True | neutron-metadata-agent |
| bed6cc6d-fd7e-4748-88cd-c68ed21e590d | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
| d68b0220-181e-48c6-8dec-3bfc1b71afab | DHCP agent | controller | nova | :-) | True | neutron-dhcp-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

Dashboard

controller node

Install the packages

yum install openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings file

Configure the dashboard to use OpenStack services on the controller node:

OPENSTACK_HOST = "controller"

Allow all hosts to access the dashboard:

ALLOWED_HOSTS = ['*', ]

Configure the memcached session storage service:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}

Enable the Identity API version 3:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Enable support for domains

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Configure the API versions:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

Configure default as the default domain for users created via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

Configure user as the default role for users created via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

If you chose networking option 1 (provider networks), disable support for layer-3 networking services:

OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}

Optionally, configure the time zone:

TIME_ZONE = "Asia/Shanghai"
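
Before restarting httpd, a quick sanity check (a sketch) that the key settings landed in local_settings:

grep -E 'OPENSTACK_HOST|ALLOWED_HOSTS|SESSION_ENGINE|OPENSTACK_KEYSTONE_URL|TIME_ZONE' /etc/openstack-dashboard/local_settings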

Restart the web server and session storage service

systemctl restart httpd.service memcached.service

Verify operation

Access the dashboard at http://controller/dashboard in a browser.

Authenticate using the admin or demo user credentials and the default domain.

Launch an instance

Create the provider network

On the controller node, source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

Create the network

neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider

Create the subnet

neutron subnet-create --name provider --allocation-pool start=10.0.0.101,end=10.0.0.250  --dns-nameserver 223.5.5.5 --gateway 10.0.0.254 provider 10.0.0.0/24

Verify

neutron net-list
neutron subnet-list

Create the m1.nano flavor

Use the m1.nano flavor to boot the CirrOS image

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack flavor list

Generate and add a key pair:

ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

Verify that the public key was added

openstack keypair list

Add rules to the default security group

Allow ICMP (ping):

openstack security group rule create --proto icmp default

Allow secure shell (SSH) access:

openstack security group rule create --proto tcp --dst-port 22 default
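
An optional check, as a sketch: list the rules now attached to the default security group.

openstack security group rule list default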
