environment

1.Network planes

management (management network) → software installation and inter-component communication

provider (instance networking) → provider network: instances obtain IP addresses directly from the physical network and can reach each other directly

               self-service network (private network): create a virtual network → create a router ← set the provider network as the gateway; the router forwards traffic from the internal network to the external network

2.NTP time service (required for a cluster)

【controller node】

1.Install the packages

  1. yum install chrony -y

2.Edit the chrony.conf file and add, change, or remove the following keys as necessary for your environment

  1. vim /etc/chrony.conf

3.Replace NTP_SERVER with the hostname or IP address of a suitable more accurate (lower stratum) NTP server

  1. server NTP_SERVER iburst

4.To enable other nodes to connect to the chrony daemon on the controller node

  1. allow 10.199.100.0/24

5.Restart the NTP service

  1. systemctl enable chronyd.service;systemctl restart chronyd.service

(1)code

  1. yum install chrony -y
  2. sed -i '/^server/s/server/#server/' /etc/chrony.conf
    sed -i '2a server ntp7.aliyun.com iburst' /etc/chrony.conf
    sed -i '/^#allow/a allow 10.199.100.0/24' /etc/chrony.conf
    systemctl enable chronyd.service;systemctl restart chronyd.service

【other nodes】

1.Install the packages

  1. yum install chrony -y

2.Configure the chrony.conf file and comment out or remove all but one server key

  1. vim /etc/chrony.conf

3.Change it to reference the controller node

  1. server controller iburst

4.Restart the NTP service

  1. systemctl enable chronyd.service;systemctl restart chronyd.service

(2)code

  1. yum install chrony -y
  2. sed -i '/^server/s/server/#server/' /etc/chrony.conf
    sed -i '2a server controller iburst' /etc/chrony.conf
    systemctl enable chronyd.service;systemctl restart chronyd.service

【verify operation】

1.Run this command on all nodes

  1. chronyc sources
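As an optional extra check (standard chrony tooling, not part of the original steps), chronyc tracking reports whether each node is actually synchronized to its configured source:

  1. chronyc tracking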

3.OpenStack packages: enable the OpenStack repository

1.Install the package to enable the OpenStack repository

  1. yum install centos-release-openstack-train -y

2.Upgrade the packages on all nodes

  1. yum upgrade

3.Install the OpenStack client

  1. yum install python-openstackclient -y

(3)code

  1. yum install centos-release-openstack-train -y
    yum install python-openstackclient -y
  2. yum upgrade

4.SQL database

1.Install the packages

  1. yum install mariadb mariadb-server python2-PyMySQL -y

2.Create and edit the /etc/my.cnf.d/openstack.cnf file (backup existing configuration files in /etc/my.cnf.d/ if needed)

  1. vim /etc/my.cnf.d/openstack.cnf

3.Start the database service and configure it to start when the system boots

  1. systemctl enable mariadb.service;systemctl restart mariadb.service

4.Secure the database service by running the mysql_secure_installation script

  1. mysql_secure_installation

(4)code

  1. yum install mariadb mariadb-server python2-PyMySQL -y
  2. cat <<EOF> /etc/my.cnf.d/openstack.cnf
    [mysqld]
  3. bind-address = 10.1.10.151
  4. default-storage-engine = innodb
  5. innodb_file_per_table = on
  6. max_connections = 4096
  7. collation-server = utf8_general_ci
  8. character-set-server = utf8
    EOF
    systemctl enable mariadb.service;systemctl restart mariadb.service
  9. mysql_secure_installation

5.Message queue: coordinates operations and status information between components

1.Install the package

  1. yum install rabbitmq-server -y

2.Start the message queue service and configure it to start when the system boots

  1. systemctl enable rabbitmq-server.service;systemctl restart rabbitmq-server.service

3.Add the openstack user

  1. rabbitmqctl add_user openstack RABBIT_PASS  ##Replace RABBIT_PASS with a suitable password

4.Permit configuration, write, and read access for the openstack user

  1. rabbitmqctl set_permissions openstack ".*" ".*" ".*"

(5)code

  1. yum install rabbitmq-server -y
  2. systemctl enable rabbitmq-server.service;systemctl restart rabbitmq-server.service
  3. rabbitmqctl add_user openstack RABBIT_PASS
  4. rabbitmqctl set_permissions openstack ".*" ".*" ".*"
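To confirm the openstack user and its permissions were created (optional check, standard rabbitmqctl subcommands):

  1. rabbitmqctl list_users
  2. rabbitmqctl list_permissions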

6.Memcached (stores tokens)

1.Install the packages

  1. yum install memcached python-memcached -y

2.Edit the /etc/sysconfig/memcached file and complete the following actions

  1. OPTIONS="-l 127.0.0.1,::1,controller"  ##Change the existing line OPTIONS="-l 127.0.0.1,::1"

3.Start the Memcached service and configure it to start when the system boots

  1. systemctl enable memcached.service;systemctl restart memcached.service

(6)code

  1. yum install memcached python-memcached -y
  2. sed -i '/^OPTIONS=/cOPTIONS="-l 127.0.0.1,::1,controller"' /etc/sysconfig/memcached
    systemctl enable memcached.service;systemctl restart memcached.service
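A quick way to confirm memcached is listening on the expected addresses (optional check; memcached-tool ships with the memcached package):

  1. ss -tnlp | grep 11211
  2. memcached-tool controller:11211 stats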

7.Etcd

1.Install the package

  1. yum install etcd -y

2.Edit the /etc/etcd/etcd.conf file and set

  1. vim /etc/etcd/etcd.conf
  2. #[Member]
  3. ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  4. ETCD_LISTEN_PEER_URLS="http://10.199.100.191:2380"
  5. ETCD_LISTEN_CLIENT_URLS="http://10.199.100.191:2379"
  6. ETCD_NAME="controller"
  7. #[Clustering]
  8. ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.199.100.191:2380"
  9. ETCD_ADVERTISE_CLIENT_URLS="http://10.199.100.191:2379"
  10. ETCD_INITIAL_CLUSTER="controller=http://10.199.100.191:2380"
  11. ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
  12. ETCD_INITIAL_CLUSTER_STATE="new"

3.Enable and start the etcd service

  1. systemctl enable etcd;systemctl restart etcd

(7)code

  1. yum install etcd -y
  2. sed -i '/ETCD_DATA_DIR=/cETCD_DATA_DIR="/var/lib/etcd/default.etcd"' /etc/etcd/etcd.conf
    sed -i '/ETCD_LISTEN_PEER_URLS=/cETCD_LISTEN_PEER_URLS="http://10.199.100.191:2380"' /etc/etcd/etcd.conf
    sed -i '/ETCD_LISTEN_CLIENT_URLS=/cETCD_LISTEN_CLIENT_URLS="http://10.199.100.191:2379"' /etc/etcd/etcd.conf
    sed -i '/ETCD_NAME=/cETCD_NAME="controller"' /etc/etcd/etcd.conf
    sed -i '/ETCD_INITIAL_ADVERTISE_PEER_URLS=/cETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.199.100.191:2380"' /etc/etcd/etcd.conf
    sed -i '/ETCD_ADVERTISE_CLIENT_URLS=/cETCD_ADVERTISE_CLIENT_URLS="http://10.199.100.191:2379"' /etc/etcd/etcd.conf
    sed -i '/ETCD_INITIAL_CLUSTER=/cETCD_INITIAL_CLUSTER="controller=http://10.199.100.191:2380"' /etc/etcd/etcd.conf
    sed -i '/ETCD_INITIAL_CLUSTER_TOKEN=/cETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"' /etc/etcd/etcd.conf
    sed -i '/ETCD_INITIAL_CLUSTER_STATE=/cETCD_INITIAL_CLUSTER_STATE="new"' /etc/etcd/etcd.conf
    systemctl enable etcd;systemctl restart etcd
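An optional health check using the etcdctl v2 commands shipped with the CentOS 7 etcd package; the endpoint matches the ETCD_ADVERTISE_CLIENT_URLS set above:

  1. etcdctl --endpoints=http://10.199.100.191:2379 cluster-health
  2. etcdctl --endpoints=http://10.199.100.191:2379 member list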

keystone

1.Install and configure components

【Create the database and grant access】

1.Use the database access client to connect to the database server as the root user

  1. mysql -u root -p

2.Create the keystone database

  1. MariaDB [(none)]> CREATE DATABASE keystone;

3.Grant proper access to the keystone database

  1. MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
  2. MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

【Install and configure components】

4.Install the packages: openstack-keystone, httpd (serves the API over HTTP), and mod_wsgi (middleware between Python applications and the web server, allowing Python applications to be deployed on it)

  1. yum install openstack-keystone httpd mod_wsgi -y

5.Edit the /etc/keystone/keystone.conf file and complete the following actions

  1. [database]
    # ...
    connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
  1. [token]
  2. # ...
  3. provider = fernet

6.Populate the Identity service database

  1. su -s /bin/sh -c "keystone-manage db_sync" keystone

7.Initialize Fernet key repositories

  1. keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  2. keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

8.Bootstrap the Identity service

  1. keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  2. --bootstrap-admin-url http://controller:5000/v3/ \
  3. --bootstrap-internal-url http://controller:5000/v3/ \
  4. --bootstrap-public-url http://controller:5000/v3/ \
  5. --bootstrap-region-id RegionOne

【Configure the Apache HTTP server】

9.Edit the /etc/httpd/conf/httpd.conf file and configure

  1. ServerName controller

10.Create a link to the /usr/share/keystone/wsgi-keystone.conf file

  1. ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

【Finalize the installation】

11.Start the Apache HTTP service and configure it to start when the system boots

  1. systemctl enable httpd.service;systemctl restart httpd.service

12.Configure the administrative account by setting the proper environmental variables

  1. export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS
    export OS_PROJECT_NAME=admin
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3

(8)code

  1. mysql -u root -p1234qwer
  2. CREATE DATABASE keystone;
  3. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
  4. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
  5. quit
  6. yum install openstack-keystone httpd mod_wsgi -y
  7. sed -i -e '/^connection/s/connection/#connection/' -e '/^provider/s/provider/#provider/' /etc/keystone/keystone.conf
  8. sed -i '/^#connection/a connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone' /etc/keystone/keystone.conf
  9. sed -i '/^#provider/a provider = fernet' /etc/keystone/keystone.conf
  10. su -s /bin/sh -c "keystone-manage db_sync" keystone
  11. keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  12. keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  13. keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  14. --bootstrap-admin-url http://controller:5000/v3/ \
  15. --bootstrap-internal-url http://controller:5000/v3/ \
  16. --bootstrap-public-url http://controller:5000/v3/ \
  17. --bootstrap-region-id RegionOne
  18. sed -i -e '/^ServerName/s/ServerName/#ServerName/' /etc/httpd/conf/httpd.conf
  19. sed -i '/^#ServerName/a ServerName controller' /etc/httpd/conf/httpd.conf
  20. ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
  21. systemctl enable httpd.service;systemctl restart httpd.service

2.Create domains, projects, users, and roles

1.Although the “default” domain already exists from the keystone-manage bootstrap step in this guide, a formal way to create a new domain would be

  1. openstack domain create --description "An Example Domain" example

2.This guide uses a service project that contains a unique user for each service that you add to your environment. Create the service project

  1. openstack project create --domain default --description "Service Project" service

3.Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates the myproject project and myuser user

  1. openstack project create --domain default --description "Demo Project" myproject  ##Create the myproject project
  2. openstack user create --domain default --password-prompt myuser  ##Create the myuser user
  3. openstack role create myrole  ##Create the myrole role
  4. openstack role add --project myproject --user myuser myrole  ##Add the myrole role to the myproject project and myuser user

(Create the domain, project, user, and role, then grant the role to the user)

  1. openstack domain create --description "An Example Domain" example
    openstack project create --domain default --description "Demo Project" myproject
  2. openstack user create --domain default --password DEMO_PASS myuser
  3. openstack role create myrole
  4. openstack role add --project myproject --user myuser myrole
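To confirm the role assignment above took effect (optional check with the standard client):

  1. openstack role assignment list --user myuser --project myproject --names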

3.Verification: request authentication tokens

1.Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables

  1. unset OS_AUTH_URL OS_PASSWORD

2.As the admin user, request an authentication token

  1. openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue

3.As the myuser user created in the previous section, request an authentication token

  1. openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name myproject --os-username myuser token issue

4.Create the OpenStack client environment scripts

1.Create and edit the admin-openrc file and add the following content

  1. export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2

2.Create and edit the demo-openrc file and add the following content

  1. export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=myproject
    export OS_USERNAME=myuser
    export OS_PASSWORD=DEMO_PASS
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2

3.Using the scripts

  1. . admin-openrc

(9)code

  1. cat <<EOF> /root/admin-openrc
  2. export OS_PROJECT_DOMAIN_NAME=Default
  3. export OS_USER_DOMAIN_NAME=Default
  4. export OS_PROJECT_NAME=admin
  5. export OS_USERNAME=admin
  6. export OS_PASSWORD=ADMIN_PASS
  7. export OS_AUTH_URL=http://controller:5000/v3
  8. export OS_IDENTITY_API_VERSION=3
  9. export OS_IMAGE_API_VERSION=2
  10. EOF
  11. cat <<EOF> /root/demo-openrc
  12. export OS_PROJECT_DOMAIN_NAME=Default
  13. export OS_USER_DOMAIN_NAME=Default
  14. export OS_PROJECT_NAME=myproject
  15. export OS_USERNAME=myuser
  16. export OS_PASSWORD=DEMO_PASS
  17. export OS_AUTH_URL=http://controller:5000/v3
  18. export OS_IDENTITY_API_VERSION=3
  19. export OS_IMAGE_API_VERSION=2
  20. EOF

glance

1.Prerequisites

1.Create the database and grant access

  1. CREATE DATABASE glance;
  2. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
  3. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

2.Create the user → create the glance user

  1. openstack user create --domain default --password glance123 glance

Grant access → add the admin role

  1. openstack role add --project admin --user glance admin

Create the service entity → create the glance service

  1. openstack service create --name glance --description "OpenStack Image" image

3.Create the Image service API endpoints: public, internal, and admin

  1. openstack endpoint create --region RegionOne image public http://controller:9292
  2. openstack endpoint create --region RegionOne image internal http://controller:9292
  3. openstack endpoint create --region RegionOne image admin http://controller:9292

2.Install and configure components

1.Install the packages

  1. yum install openstack-glance -y

2.Edit the configuration file (/etc/glance/glance-api.conf)

  1. [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    [keystone_authtoken]
    www_authenticate_uri  = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = admin
    username = glance
    password = glance123
    [paste_deploy]
    flavor = keystone
    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/

3.Populate the Image service database

  1. su -s /bin/sh -c "glance-manage db_sync" glance

4.Start the service

  1. systemctl enable openstack-glance-api.service;systemctl restart openstack-glance-api.service

(10)code

  1. mysql -u root -p1234qwer
  2. CREATE DATABASE glance;
  3. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
  4. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
  5. quit
  6. . /root/admin-openrc
  7. openstack user create --domain default --password glance123 glance
  8. openstack role add --project admin --user glance admin
  9. openstack service create --name glance --description "OpenStack Image" image
  10. openstack endpoint create --region RegionOne image public http://controller:9292
  11. openstack endpoint create --region RegionOne image internal http://controller:9292
  12. openstack endpoint create --region RegionOne image admin http://controller:9292
  13. yum install openstack-glance -y
  14. sed -i '/^\[database\]/a connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a password = glance123' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a username = glance' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a project_name = admin' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a user_domain_name = Default' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a project_domain_name = Default' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a auth_type = password' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a memcached_servers = controller:11211' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a auth_url = http://controller:5000' /etc/glance/glance-api.conf
    sed -i '/^\[keystone_authtoken\]/a www_authenticate_uri  = http://controller:5000' /etc/glance/glance-api.conf
    sed -i '/^\[paste_deploy\]/a flavor = keystone' /etc/glance/glance-api.conf
    sed -i '/^\[glance_store\]/a filesystem_store_datadir = /var/lib/glance/images/' /etc/glance/glance-api.conf
    sed -i '/^\[glance_store\]/a default_store = file' /etc/glance/glance-api.conf
    sed -i '/^\[glance_store\]/a stores = file,http' /etc/glance/glance-api.conf
  15. su -s /bin/sh -c "glance-manage db_sync" glance
  16. systemctl enable openstack-glance-api.service;systemctl restart openstack-glance-api.service

3.Verification

1.openstack image create  ##register an image

  1. . admin-openrc
    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public

2.openstack image list  ##list image information

  1. openstack image list

placement

1.Prerequisites

  1. CREATE DATABASE placement;
    GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
  2. GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
    openstack user create --domain default --password placement123 placement
    openstack role add --project admin --user placement admin
    openstack service create --name placement --description "Placement API" placement
    openstack endpoint create --region RegionOne placement public http://controller:8778
  3. openstack endpoint create --region RegionOne placement internal http://controller:8778
  4. openstack endpoint create --region RegionOne placement admin http://controller:8778

2.Install and configure components

1.Install the packages

  1. yum install openstack-placement-api -y

2.Edit the /etc/placement/placement.conf file and complete the following actions

Configure database access

  1. [placement_database]
  2. # ...
  3. connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

Configure Keystone authentication

  1. [api]
  2. # ...
  3. auth_strategy = keystone
  4.  
  5. [keystone_authtoken]
  6. # ...
  7. auth_url = http://controller:5000/v3
  8. memcached_servers = controller:11211
  9. auth_type = password
  10. project_domain_name = Default
  11. user_domain_name = Default
  12. project_name = service
  13. username = placement
  14. password = PLACEMENT_PASS

Enable access to the Placement API

  1. Add the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf:
  1. <Directory /usr/bin>
  2. <IfVersion >= 2.4>
  3. Require all granted
  4. </IfVersion>
  5. <IfVersion < 2.4>
  6. Order allow,deny
  7. Allow from all
  8. </IfVersion>
  9. </Directory>

3.Populate the placement database

  1. su -s /bin/sh -c "placement-manage db sync" placement

4.Start the service

  1. systemctl restart httpd

(11)code

  1. mysql -u root -p1234qwer
  2. CREATE DATABASE placement;
  3. GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
  4. GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
    quit
    . /root/admin-openrc
  5. openstack user create --domain default --password placement123 placement
  6. openstack role add --project admin --user placement admin
  7. openstack service create --name placement --description "Placement API" placement
  8. openstack endpoint create --region RegionOne placement public http://controller:8778
  9. openstack endpoint create --region RegionOne placement internal http://controller:8778
  10. openstack endpoint create --region RegionOne placement admin http://controller:8778
  11. yum install openstack-placement-api -y
  12. sed -i '/^\[placement_database\]/a connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement' /etc/placement/placement.conf
  13. sed -i '/^\[api\]/a auth_strategy = keystone' /etc/placement/placement.conf
  14. sed -i '/^\[keystone_authtoken\]/a password = placement123' /etc/placement/placement.conf
  15. sed -i '/^\[keystone_authtoken\]/a username = placement' /etc/placement/placement.conf
  16. sed -i '/^\[keystone_authtoken\]/a project_name = admin' /etc/placement/placement.conf
  17. sed -i '/^\[keystone_authtoken\]/a user_domain_name = Default' /etc/placement/placement.conf
  18. sed -i '/^\[keystone_authtoken\]/a project_domain_name = Default' /etc/placement/placement.conf
  19. sed -i '/^\[keystone_authtoken\]/a auth_type = password' /etc/placement/placement.conf
  20. sed -i '/^\[keystone_authtoken\]/a memcached_servers = controller:11211' /etc/placement/placement.conf
  21. sed -i '/^\[keystone_authtoken\]/a auth_url = http://controller:5000/v3' /etc/placement/placement.conf
  22. cat <<EOF>> /etc/httpd/conf.d/00-nova-placement-api.conf
  23. <Directory /usr/bin>
  24. <IfVersion >= 2.4>
  25. Require all granted
  26. </IfVersion>
  27. <IfVersion < 2.4>
  28. Order allow,deny
  29. Allow from all
  30. </IfVersion>
  31. </Directory>
  32. EOF
  33. su -s /bin/sh -c "placement-manage db sync" placement
  34. systemctl restart httpd

3.Verification

1.Perform status checks to make sure everything is in order

  1. placement-status upgrade check

2.Run some commands against the placement API
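The listings below are a reasonable choice for this step; they need the osc-placement client plugin (the package name python2-osc-placement is assumed for the Train repository on CentOS 7):

  1. yum install python2-osc-placement -y
  2. openstack --os-placement-api-version 1.2 resource class list --sort-column name
  3. openstack --os-placement-api-version 1.6 trait list --sort-column name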

nova

controller node

1.Prerequisites

  1. CREATE DATABASE nova_api;
  2. CREATE DATABASE nova;
  3. CREATE DATABASE nova_cell0;
  4. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
  5. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
  6. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
  7. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
  8. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
  9. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
  10. openstack user create --domain default --password nova123 nova
  11. openstack role add --project admin --user nova admin
  12. openstack service create --name nova --description "OpenStack Compute" compute
  13. openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
  14. openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
  15. openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

2.Install and configure components

1.Install the packages

openstack-nova-api, openstack-nova-conductor (connects to the database), openstack-nova-console (console access), openstack-nova-novncproxy (provides the console proxy service), openstack-nova-scheduler (compute scheduling), openstack-nova-placement-api

  1. yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

2.Edit the /etc/nova/nova.conf file

Configure the API

  1. [DEFAULT]
    # ...
    enabled_apis = osapi_compute,metadata

Configure database access (database, api_database)

  1. [api_database]
  2. # ...
  3. connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
  4.  
  5. [database]
  6. # ...
  7. connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

Configure RabbitMQ

  1. [DEFAULT]
    # ...
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

Configure Keystone authentication

  1. [api]
  2. # ...
  3. auth_strategy = keystone
  4.  
  5. [keystone_authtoken]
  6. # ...
  7. www_authenticate_uri = http://controller:5000/
  8. auth_url = http://controller:5000/
  9. memcached_servers = controller:11211
  10. auth_type = password
  11. project_domain_name = Default
  12. user_domain_name = Default
  13. project_name = admin
  14. username = nova
  15. password = nova123

Configure Networking service support

  1. [DEFAULT]
  2. # ...
  3. use_neutron = true
  4. firewall_driver = nova.virt.firewall.NoopFirewallDriver

Configure the VNC proxy

  1. [DEFAULT]
  2. ...
  3. my_ip = 10.1.10.151
  4.  
  5. [vnc]
  6. enabled = true
  7. # ...
  8. server_listen = $my_ip
  9. server_proxyclient_address = $my_ip

Configure the Image service API

  1. [glance]
  2. # ...
  3. api_servers = http://controller:9292

Configure the lock path

  1. [oslo_concurrency]
  2. # ...
  3. lock_path = /var/lib/nova/tmp

Configure Placement service authentication

  1. [placement]
  2. # ...
  3. region_name = RegionOne
  4. project_domain_name = Default
  5. project_name = admin
  6. auth_type = password
  7. user_domain_name = Default
  8. auth_url = http://controller:5000/v3
  9. username = placement
  10. password = placement123

3.Populate the databases

  1. su -s /bin/sh -c "nova-manage api_db sync" nova
  2. su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
  3. su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
  4. su -s /bin/sh -c "nova-manage db sync" nova
  5. su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

4.Start the services

  1. systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
  2. systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

(12)code

  1. mysql -u root -p1234qwer
  2. CREATE DATABASE nova_api;
  3. CREATE DATABASE nova;
  4. CREATE DATABASE nova_cell0;
  5. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
  6. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
  7. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
  8. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
  9. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
  10. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
  11. quit
  12. . /root/admin-openrc
  13. openstack user create --domain default --password nova123 nova
  14. openstack role add --project admin --user nova admin
  15. openstack service create --name nova --description "OpenStack Compute" compute
  16. openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
  17. openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
  18. openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
  19. yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
  20. sed -i '/^\[DEFAULT\]/a firewall_driver = nova.virt.firewall.NoopFirewallDriver' /etc/nova/nova.conf
  21. sed -i '/^\[DEFAULT\]/a use_neutron = true' /etc/nova/nova.conf
  22. sed -i '/^\[DEFAULT\]/a my_ip = 10.1.10.151' /etc/nova/nova.conf
  23. sed -i '/^\[DEFAULT\]/a transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/' /etc/nova/nova.conf
  24. sed -i '/^\[DEFAULT\]/a enabled_apis = osapi_compute,metadata' /etc/nova/nova.conf
  25. sed -i '/^\[api_database\]/a connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api' /etc/nova/nova.conf
  26. sed -i '/^\[database\]/a connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova' /etc/nova/nova.conf
  27. sed -i '/^\[api\]/a auth_strategy = keystone' /etc/nova/nova.conf
  28. sed -i '/^\[keystone_authtoken\]/a password = nova123' /etc/nova/nova.conf
  29. sed -i '/^\[keystone_authtoken\]/a username = nova' /etc/nova/nova.conf
  30. sed -i '/^\[keystone_authtoken\]/a project_name = admin' /etc/nova/nova.conf
  31. sed -i '/^\[keystone_authtoken\]/a user_domain_name = Default' /etc/nova/nova.conf
  32. sed -i '/^\[keystone_authtoken\]/a project_domain_name = Default' /etc/nova/nova.conf
  33. sed -i '/^\[keystone_authtoken\]/a auth_type = password' /etc/nova/nova.conf
  34. sed -i '/^\[keystone_authtoken\]/a memcached_servers = controller:11211' /etc/nova/nova.conf
  35. sed -i '/^\[keystone_authtoken\]/a auth_url = http://controller:5000/' /etc/nova/nova.conf
  36. sed -i '/^\[keystone_authtoken\]/a www_authenticate_uri = http://controller:5000/' /etc/nova/nova.conf
  37. sed -i '/^\[vnc\]/a server_proxyclient_address = $my_ip' /etc/nova/nova.conf
  38. sed -i '/^\[vnc\]/a server_listen = $my_ip' /etc/nova/nova.conf
  39. sed -i '/^\[vnc\]/a enabled = true' /etc/nova/nova.conf
  40. sed -i '/^\[glance\]/a api_servers = http://controller:9292' /etc/nova/nova.conf
  41. sed -i '/^\[oslo_concurrency\]/a lock_path = /var/lib/nova/tmp' /etc/nova/nova.conf
  42. sed -i '/^\[placement\]/a password = placement123' /etc/nova/nova.conf
  43. sed -i '/^\[placement\]/a username = placement' /etc/nova/nova.conf
  44. sed -i '/^\[placement\]/a auth_url = http://controller:5000/v3' /etc/nova/nova.conf
  45. sed -i '/^\[placement\]/a user_domain_name = Default' /etc/nova/nova.conf
  46. sed -i '/^\[placement\]/a auth_type = password' /etc/nova/nova.conf
  47. sed -i '/^\[placement\]/a project_name = admin' /etc/nova/nova.conf
  48. sed -i '/^\[placement\]/a project_domain_name = Default' /etc/nova/nova.conf
  49. sed -i '/^\[placement\]/a region_name = RegionOne' /etc/nova/nova.conf
  50. su -s /bin/sh -c "nova-manage api_db sync" nova
  51. su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
  52. su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
  53. su -s /bin/sh -c "nova-manage db sync" nova
  54. su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
  55. systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
  56. systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

3.Verification

1.Verification: openstack compute service list  ##list the Compute service components

  1. openstack compute service list

2.List API endpoints in the Identity service to verify connectivity with the Identity service

  1. openstack catalog list

3.List images in the Image service to verify connectivity with the Image service

  1. openstack image list

4.Check the cells and placement API are working successfully and that other necessary prerequisites are in place

  1. su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    nova-status upgrade check
    openstack compute service list --service nova-compute

compute node

1.Install and configure components

1.Install the packages

  1. yum install openstack-nova-compute -y

2.Edit the /etc/nova/nova.conf file

Configure the API

  1. [DEFAULT]
    # ...
    enabled_apis = osapi_compute,metadata

Configure database access (database, api_database)

  1. [api_database]
  2. # ...
  3. connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
  4.  
  5. [database]
  6. # ...
  7. connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

Configure RabbitMQ

  1. [DEFAULT]
    # ...
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

Configure Keystone authentication

  1. [api]
  2. # ...
  3. auth_strategy = keystone
  4.  
  5. [keystone_authtoken]
  6. # ...
  7. www_authenticate_uri = http://controller:5000/
  8. auth_url = http://controller:5000/
  9. memcached_servers = controller:11211
  10. auth_type = password
  11. project_domain_name = Default
  12. user_domain_name = Default
  13. project_name = admin
  14. username = nova
  15. password = nova123

Configure Networking service support

  1. [DEFAULT]
  2. # ...
  3. use_neutron = true
  4. firewall_driver = nova.virt.firewall.NoopFirewallDriver

Configure the VNC proxy

  1. [DEFAULT]
  2. ...
  3. my_ip = 10.1.10.152
  4.  
  5. [vnc]
  6. # ...
    enabled = true
  7. server_listen = 0.0.0.0
  8. server_proxyclient_address = $my_ip
  9. novncproxy_base_url = http://controller:6080/vnc_auto.html  ##change controller to an IP address if needed so the instance console can be opened from the dashboard

Configure the Image service API

  1. [glance]
  2. # ...
  3. api_servers = http://controller:9292

Configure the lock path

  1. [oslo_concurrency]
  2. # ...
  3. lock_path = /var/lib/nova/tmp

Configure Placement service authentication

  1. [placement]
  2. # ...
  3. region_name = RegionOne
  4. project_domain_name = Default
  5. project_name = admin
  6. auth_type = password
  7. user_domain_name = Default
  8. auth_url = http://controller:5000/v3
  9. username = placement
  10. password = placement123

(13)code

  1. yum install openstack-nova-compute -y
  2. sed -i '/^\[DEFAULT\]/a firewall_driver = nova.virt.firewall.NoopFirewallDriver' /etc/nova/nova.conf
  3. sed -i '/^\[DEFAULT\]/a use_neutron = true' /etc/nova/nova.conf
  4. sed -i '/^\[DEFAULT\]/a my_ip = 10.1.10.152' /etc/nova/nova.conf
  5. sed -i '/^\[DEFAULT\]/a transport_url = rabbit://openstack:RABBIT_PASS@controller' /etc/nova/nova.conf
  6. sed -i '/^\[DEFAULT\]/a enabled_apis = osapi_compute,metadata' /etc/nova/nova.conf
  7. sed -i '/^\[api_database\]/a connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api' /etc/nova/nova.conf
  8. sed -i '/^\[database\]/a connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova' /etc/nova/nova.conf
  9. sed -i '/^\[api\]/a auth_strategy = keystone' /etc/nova/nova.conf
  10. sed -i '/^\[keystone_authtoken\]/a password = nova123' /etc/nova/nova.conf
  11. sed -i '/^\[keystone_authtoken\]/a username = nova' /etc/nova/nova.conf
  12. sed -i '/^\[keystone_authtoken\]/a project_name = admin' /etc/nova/nova.conf
  13. sed -i '/^\[keystone_authtoken\]/a user_domain_name = Default' /etc/nova/nova.conf
  14. sed -i '/^\[keystone_authtoken\]/a project_domain_name = Default' /etc/nova/nova.conf
  15. sed -i '/^\[keystone_authtoken\]/a auth_type = password' /etc/nova/nova.conf
  16. sed -i '/^\[keystone_authtoken\]/a memcached_servers = controller:11211' /etc/nova/nova.conf
  17. sed -i '/^\[keystone_authtoken\]/a auth_url = http://controller:5000/' /etc/nova/nova.conf
  18. sed -i '/^\[keystone_authtoken\]/a www_authenticate_uri = http://controller:5000/' /etc/nova/nova.conf
  19. sed -i '/^\[vnc\]/a novncproxy_base_url = http://controller:6080/vnc_auto.html' /etc/nova/nova.conf
  20. sed -i '/^\[vnc\]/a server_proxyclient_address = $my_ip' /etc/nova/nova.conf
  21. sed -i '/^\[vnc\]/a server_listen = 0.0.0.0' /etc/nova/nova.conf
  22. sed -i '/^\[vnc\]/a enabled = true' /etc/nova/nova.conf
  23. sed -i '/^\[glance\]/a api_servers = http://controller:9292' /etc/nova/nova.conf
  24. sed -i '/^\[oslo_concurrency\]/a lock_path = /var/lib/nova/tmp' /etc/nova/nova.conf
  25. sed -i '/^\[placement\]/a password = placement123' /etc/nova/nova.conf
  26. sed -i '/^\[placement\]/a username = placement' /etc/nova/nova.conf
  27. sed -i '/^\[placement\]/a auth_url = http://controller:5000/v3' /etc/nova/nova.conf
  28. sed -i '/^\[placement\]/a user_domain_name = Default' /etc/nova/nova.conf
  29. sed -i '/^\[placement\]/a auth_type = password' /etc/nova/nova.conf
  30. sed -i '/^\[placement\]/a project_name = admin' /etc/nova/nova.conf
  31. sed -i '/^\[placement\]/a project_domain_name = Default' /etc/nova/nova.conf
  32. sed -i '/^\[placement\]/a region_name = RegionOne' /etc/nova/nova.conf
    sed -i '/^#vif_plugging_is_fatal/a vif_plugging_is_fatal=false' /etc/nova/nova.conf
    sed -i '/^#vif_plugging_timeout/a vif_plugging_timeout=0' /etc/nova/nova.conf
    systemctl enable libvirtd.service openstack-nova-compute.service;systemctl restart libvirtd.service openstack-nova-compute.service
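If the compute node is itself a virtual machine without hardware acceleration (a common lab setup), the upstream guide recommends switching libvirt to QEMU. This is only needed when the cpuinfo check below returns 0; the sed line follows the same style as the commands above and is a suggested addition, not part of the original steps:

  1. egrep -c '(vmx|svm)' /proc/cpuinfo
  2. sed -i '/^\[libvirt\]/a virt_type = qemu' /etc/nova/nova.conf
  3. systemctl restart openstack-nova-compute.service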

neutron

controller node

1.Prerequisites

  1. CREATE DATABASE neutron;
  2. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
  3. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
    openstack user create --domain default --password neutron123 neutron
    openstack role add --project admin --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
    openstack endpoint create --region RegionOne network public http://controller:9696
  4. openstack endpoint create --region RegionOne network internal http://controller:9696
  5. openstack endpoint create --region RegionOne network admin http://controller:9696

2.Install and configure components

1.Install the packages

  1. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

2.Configure the server component (/etc/neutron/neutron.conf)

Configure database access

  1. [database]
  2. # ...
  3. connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

Enable the ML2 plug-in, the router service, and overlapping IP addresses

  1. [DEFAULT]
  2. # ...
  3. core_plugin = ml2
  4. service_plugins = router
  5. allow_overlapping_ips = true

Configure RabbitMQ

  1. [DEFAULT]
  2. # ...
  3. transport_url = rabbit://openstack:RABBIT_PASS@controller

Configure Keystone access

  1. [DEFAULT]
  2. # ...
  3. auth_strategy = keystone
  4.  
  5. [keystone_authtoken]
  6. # ...
  7. www_authenticate_uri = http://controller:5000
  8. auth_url = http://controller:5000
  9. memcached_servers = controller:11211
  10. auth_type = password
  11. project_domain_name = default
  12. user_domain_name = default
  13. project_name = service
  14. username = neutron
  15. password = NEUTRON_PASS

Configure Networking to notify Compute of network topology changes

  1. [DEFAULT]
  2. # ...
  3. notify_nova_on_port_status_changes = true
  4. notify_nova_on_port_data_changes = true
  5.  
  6. [nova]
  7. # ...
  8. auth_url = http://controller:5000
  9. auth_type = password
  10. project_domain_name = default
  11. user_domain_name = default
  12. region_name = RegionOne
  13. project_name = service
  14. username = nova
  15. password = NOVA_PASS

Configure the lock path

  1. [oslo_concurrency]
  2. # ...
  3. lock_path = /var/lib/neutron/tmp

3.Configure the ML2 plug-in (/etc/neutron/plugins/ml2/ml2_conf.ini)

Enable flat, VLAN, and VXLAN networks

  1. [ml2]
  2. # ...
  3. type_drivers = flat,vlan,vxlan

Enable VXLAN self-service (private) networks

  1. [ml2]
  2. # ...
  3. tenant_network_types = vxlan

Enable the Linux bridge and layer-2 population mechanisms

  1. [ml2]
  2. # ...
  3. mechanism_drivers = linuxbridge,l2population

Enable the port security extension driver

  1. [ml2]
  2. # ...
  3. extension_drivers = port_security

Configure the provider virtual network as a flat network

  1. [ml2_type_flat]
  2. # ...
  3. flat_networks = provider

Configure the VXLAN network identifier range for self-service networks

  1. [ml2_type_vxlan]
  2. # ...
  3. vni_ranges = 1:1000

Enable ipset to increase the efficiency of security group rules

  1. [securitygroup]
  2. # ...
  3. enable_ipset = true

4.Configure the Linux bridge agent (/etc/neutron/plugins/ml2/linuxbridge_agent.ini)

Make sure the following kernel parameters are set to 1 (they go in /etc/sysctl.conf, with the br_netfilter module loaded):

  1. net.bridge.bridge-nf-call-iptables = 1
  2. net.bridge.bridge-nf-call-ip6tables = 1

Map the provider virtual network to the provider physical network interface

  1. [linux_bridge]
  2. physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

Enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population

  1. [vxlan]
  2. enable_vxlan = true
  3. local_ip = OVERLAY_INTERFACE_IP_ADDRESS
  4. l2_population = true

Enable security groups and configure the Linux bridge iptables firewall driver

  1. [securitygroup]
  2. # ...
  3. enable_security_group = true
  4. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

5.Configure the L3 agent (/etc/neutron/l3_agent.ini)

Configure the Linux bridge interface driver and the external network bridge

  1. [DEFAULT]
  2. # ...
  3. interface_driver = linuxbridge

6.Configure the DHCP agent (/etc/neutron/dhcp_agent.ini)

Configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata

  1. [DEFAULT]
  2. # ...
  3. interface_driver = linuxbridge
  4. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  5. enable_isolated_metadata = true

7.Configure the metadata agent (/etc/neutron/metadata_agent.ini)

  1. [DEFAULT]
  2. # ...
  3. nova_metadata_host = controller
  4. metadata_proxy_shared_secret = METADATA_SECRET

8.Configure Compute (/etc/nova/nova.conf) to use the Networking service: set the Neutron Keystone credentials in the [neutron] section

  1. [neutron]
  2. # ...
  3. auth_url = http://controller:5000
  4. auth_type = password
  5. project_domain_name = default
  6. user_domain_name = default
  7. region_name = RegionOne
  8. project_name = service
  9. username = neutron
  10. password = NEUTRON_PASS
  11. service_metadata_proxy = true
  12. metadata_proxy_shared_secret = METADATA_SECRET

9.Populate the database

  1. ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  2. su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

10.Start the services

  1. systemctl restart openstack-nova-api.service
  2. systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  3. systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  4. systemctl enable neutron-l3-agent.service;systemctl restart neutron-l3-agent.service

(14)code

  1. mysql -u root -p1234qwer
  2. CREATE DATABASE neutron;
  3. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
  4. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
  5. quit
  6. . /root/admin-openrc
  7. openstack user create --domain default --password neutron123 neutron
  8. openstack role add --project admin --user neutron admin
  9. openstack service create --name neutron --description "OpenStack Networking" network
  10. openstack endpoint create --region RegionOne network public http://controller:9696
  11. openstack endpoint create --region RegionOne network internal http://controller:9696
  12. openstack endpoint create --region RegionOne network admin http://controller:9696
  13. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
  14. sed -i '/^\[DEFAULT\]/a notify_nova_on_port_data_changes = true' /etc/neutron/neutron.conf
  15. sed -i '/^\[DEFAULT\]/a notify_nova_on_port_status_changes = true' /etc/neutron/neutron.conf
  16. sed -i '/^\[DEFAULT\]/a auth_strategy = keystone' /etc/neutron/neutron.conf
  17. sed -i '/^\[DEFAULT\]/a transport_url = rabbit://openstack:RABBIT_PASS@controller' /etc/neutron/neutron.conf
  18. sed -i '/^\[DEFAULT\]/a allow_overlapping_ips = true' /etc/neutron/neutron.conf
  19. sed -i '/^\[DEFAULT\]/a service_plugins = router' /etc/neutron/neutron.conf
  20. sed -i '/^\[DEFAULT\]/a core_plugin = ml2' /etc/neutron/neutron.conf
  21. sed -i '/^\[database\]/a connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron' /etc/neutron/neutron.conf
  22. sed -i '/^\[keystone_authtoken\]/a password = neutron123' /etc/neutron/neutron.conf
  23. sed -i '/^\[keystone_authtoken\]/a username = neutron' /etc/neutron/neutron.conf
  24. sed -i '/^\[keystone_authtoken\]/a project_name = admin' /etc/neutron/neutron.conf
  25. sed -i '/^\[keystone_authtoken\]/a user_domain_name = Default' /etc/neutron/neutron.conf
  26. sed -i '/^\[keystone_authtoken\]/a project_domain_name = Default' /etc/neutron/neutron.conf
  27. sed -i '/^\[keystone_authtoken\]/a auth_type = password' /etc/neutron/neutron.conf
  28. sed -i '/^\[keystone_authtoken\]/a memcached_servers = controller:11211' /etc/neutron/neutron.conf
  29. sed -i '/^\[keystone_authtoken\]/a auth_url = http://controller:5000/' /etc/neutron/neutron.conf
  30. sed -i '/^\[keystone_authtoken\]/a www_authenticate_uri = http://controller:5000/' /etc/neutron/neutron.conf
  31. sed -i '/^\[oslo_concurrency\]/a lock_path = /var/lib/neutron/tmp' /etc/neutron/neutron.conf
  32. echo '[nova]' >> /etc/neutron/neutron.conf
  33. sed -i '/^\[nova\]/a password = nova123' /etc/neutron/neutron.conf
  34. sed -i '/^\[nova\]/a username = nova' /etc/neutron/neutron.conf
  35. sed -i '/^\[nova\]/a project_name = admin' /etc/neutron/neutron.conf
  36. sed -i '/^\[nova\]/a region_name = RegionOne' /etc/neutron/neutron.conf
  37. sed -i '/^\[nova\]/a user_domain_name = default' /etc/neutron/neutron.conf
  38. sed -i '/^\[nova\]/a project_domain_name = default' /etc/neutron/neutron.conf
  39. sed -i '/^\[nova\]/a auth_type = password' /etc/neutron/neutron.conf
  40. sed -i '/^\[nova\]/a auth_url = http://controller:5000' /etc/neutron/neutron.conf
  41. echo '[ml2]' >> /etc/neutron/plugins/ml2/ml2_conf.ini
  42. sed -i '/^\[ml2\]/a extension_drivers = port_security' /etc/neutron/plugins/ml2/ml2_conf.ini
  43. sed -i '/^\[ml2\]/a mechanism_drivers = linuxbridge,l2population' /etc/neutron/plugins/ml2/ml2_conf.ini
  44. sed -i '/^\[ml2\]/a tenant_network_types = vxlan' /etc/neutron/plugins/ml2/ml2_conf.ini
  45. sed -i '/^\[ml2\]/a type_drivers = flat,vlan,vxlan' /etc/neutron/plugins/ml2/ml2_conf.ini
  46. echo '[ml2_type_flat]' >> /etc/neutron/plugins/ml2/ml2_conf.ini
  47. sed -i '/^\[ml2_type_flat\]/a flat_networks = provider' /etc/neutron/plugins/ml2/ml2_conf.ini
  48. echo '[ml2_type_vxlan]' >> /etc/neutron/plugins/ml2/ml2_conf.ini
  49. sed -i '/^\[ml2_type_vxlan\]/a vni_ranges = 1:1000' /etc/neutron/plugins/ml2/ml2_conf.ini
  50. echo '[securitygroup]' >> /etc/neutron/plugins/ml2/ml2_conf.ini
  51. sed -i '/^\[securitygroup\]/a enable_ipset = true' /etc/neutron/plugins/ml2/ml2_conf.ini
  52. echo '[linux_bridge]' >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  53. sed -i '/^\[linux_bridge\]/a physical_interface_mappings = provider:ens32' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  54. echo '[vxlan]' >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  55. sed -i '/^\[vxlan\]/a l2_population = true' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  56. sed -i '/^\[vxlan\]/a local_ip = 10.1.10.151' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  57. sed -i '/^\[vxlan\]/a enable_vxlan = true' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  58. echo '[securitygroup]' >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  59. sed -i '/^\[securitygroup\]/a firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  60. sed -i '/^\[securitygroup\]/a enable_security_group = true' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  61. echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
  62. echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
  63. modprobe br_netfilter
  64. /sbin/sysctl -p
  65. sed -i '/^\[DEFAULT\]/a interface_driver = linuxbridge' /etc/neutron/l3_agent.ini
  66. sed -i '/^\[DEFAULT\]/a interface_driver = linuxbridge' /etc/neutron/dhcp_agent.ini
  67. sed -i '/^\[DEFAULT\]/a dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq' /etc/neutron/dhcp_agent.ini
  68. sed -i '/^\[DEFAULT\]/a enable_isolated_metadata = true' /etc/neutron/dhcp_agent.ini
  69. sed -i '/^\[DEFAULT\]/a metadata_proxy_shared_secret = metadata123' /etc/neutron/metadata_agent.ini
  70. sed -i '/^\[DEFAULT\]/a nova_metadata_host = controller' /etc/neutron/metadata_agent.ini
  71. sed -i '/^\[neutron\]/a metadata_proxy_shared_secret = metadata123' /etc/nova/nova.conf
  72. sed -i '/^\[neutron\]/a service_metadata_proxy = true' /etc/nova/nova.conf
  73. sed -i '/^\[neutron\]/a password = neutron123' /etc/nova/nova.conf
  74. sed -i '/^\[neutron\]/a username = neutron' /etc/nova/nova.conf
  75. sed -i '/^\[neutron\]/a project_name = admin' /etc/nova/nova.conf
  76. sed -i '/^\[neutron\]/a region_name = RegionOne' /etc/nova/nova.conf
  77. sed -i '/^\[neutron\]/a user_domain_name = default' /etc/nova/nova.conf
  78. sed -i '/^\[neutron\]/a project_domain_name = default' /etc/nova/nova.conf
  79. sed -i '/^\[neutron\]/a auth_type = password' /etc/nova/nova.conf
  80. sed -i '/^\[neutron\]/a auth_url = http://controller:5000' /etc/nova/nova.conf
  81. ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  82. su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  83. systemctl restart openstack-nova-api.service
  84. systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  85. systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  86. systemctl enable neutron-l3-agent.service;systemctl restart neutron-l3-agent.service

3.Verification

Verification: openstack network agent list  ##check agent status

  1. openstack network agent list

compute node

1.Install and configure components

1.Install the packages

  1. yum install openstack-neutron-linuxbridge ebtables ipset -y

2.Configure the common component (/etc/neutron/neutron.conf)

Configure RabbitMQ

  1. [DEFAULT]
  2. # ...
  3. transport_url = rabbit://openstack:RABBIT_PASS@controller

Configure Keystone access

  1. [DEFAULT]
  2. # ...
  3. auth_strategy = keystone
  4.  
  5. [keystone_authtoken]
  6. # ...
  7. www_authenticate_uri = http://controller:5000
  8. auth_url = http://controller:5000
  9. memcached_servers = controller:11211
  10. auth_type = password
  11. project_domain_name = default
  12. user_domain_name = default
  13. project_name = service
  14. username = neutron
  15. password = NEUTRON_PASS

Configure the lock path

  1. [oslo_concurrency]
  2. # ...
  3. lock_path = /var/lib/neutron/tmp

3.Configure the Linux bridge agent (/etc/neutron/plugins/ml2/linuxbridge_agent.ini)

Make sure the following kernel parameters are set to 1 (they go in /etc/sysctl.conf, with the br_netfilter module loaded):

  1. net.bridge.bridge-nf-call-iptables = 1
  2. net.bridge.bridge-nf-call-ip6tables = 1

Map the provider virtual network to the provider physical network interface

  1. [linux_bridge]
  2. physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

Enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population

  1. [vxlan]
  2. enable_vxlan = true
  3. local_ip = OVERLAY_INTERFACE_IP_ADDRESS
  4. l2_population = true

Enable security groups and configure the Linux bridge iptables firewall driver

  1. [securitygroup]
  2. # ...
  3. enable_security_group = true
  4. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

4.Configure Compute (/etc/nova/nova.conf) to use the Networking service: set the Neutron Keystone credentials in the [neutron] section

  1. [neutron]
  2. # ...
  3. auth_url = http://controller:5000
  4. auth_type = password
  5. project_domain_name = default
  6. user_domain_name = default
  7. region_name = RegionOne
  8. project_name = service
  9. username = neutron
  10. password = NEUTRON_PASS
  11. service_metadata_proxy = true
  12. metadata_proxy_shared_secret = METADATA_SECRET

(15)code

  1. yum install openstack-neutron-linuxbridge ebtables ipset -y
  2. sed -i '/^\[DEFAULT\]/a transport_url = rabbit://openstack:RABBIT_PASS@controller' /etc/neutron/neutron.conf
  3. sed -i '/^\[DEFAULT\]/a auth_strategy = keystone' /etc/neutron/neutron.conf
  4. sed -i '/^\[keystone_authtoken\]/a password = neutron123' /etc/neutron/neutron.conf
  5. sed -i '/^\[keystone_authtoken\]/a username = neutron' /etc/neutron/neutron.conf
  6. sed -i '/^\[keystone_authtoken\]/a project_name = admin' /etc/neutron/neutron.conf
  7. sed -i '/^\[keystone_authtoken\]/a user_domain_name = Default' /etc/neutron/neutron.conf
  8. sed -i '/^\[keystone_authtoken\]/a project_domain_name = Default' /etc/neutron/neutron.conf
  9. sed -i '/^\[keystone_authtoken\]/a auth_type = password' /etc/neutron/neutron.conf
  10. sed -i '/^\[keystone_authtoken\]/a memcached_servers = controller:11211' /etc/neutron/neutron.conf
  11. sed -i '/^\[keystone_authtoken\]/a auth_url = http://controller:5000/' /etc/neutron/neutron.conf
  12. sed -i '/^\[keystone_authtoken\]/a www_authenticate_uri = http://controller:5000/' /etc/neutron/neutron.conf
  13. sed -i '/^\[oslo_concurrency\]/a lock_path = /var/lib/neutron/tmp' /etc/neutron/neutron.conf
  14. echo '[linux_bridge]' >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  15. sed -i '/^\[linux_bridge\]/a physical_interface_mappings = provider:ens32' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  16. echo '[vxlan]' >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  17. sed -i '/^\[vxlan\]/a l2_population = true' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  18. sed -i '/^\[vxlan\]/a local_ip = 10.1.10.152' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  19. sed -i '/^\[vxlan\]/a enable_vxlan = true' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  20. echo '[securitygroup]' >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  21. sed -i '/^\[securitygroup\]/a firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  22. sed -i '/^\[securitygroup\]/a enable_security_group = true' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  23. echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
  24. echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
  25. modprobe br_netfilter
  26. /sbin/sysctl -p
  27. sed -i '/^\[neutron\]/a metadata_proxy_shared_secret = metadata123' /etc/nova/nova.conf
  28. sed -i '/^\[neutron\]/a service_metadata_proxy = true' /etc/nova/nova.conf
  29. sed -i '/^\[neutron\]/a password = neutron123' /etc/nova/nova.conf
  30. sed -i '/^\[neutron\]/a username = neutron' /etc/nova/nova.conf
  31. sed -i '/^\[neutron\]/a project_name = admin' /etc/nova/nova.conf
  32. sed -i '/^\[neutron\]/a region_name = RegionOne' /etc/nova/nova.conf
  33. sed -i '/^\[neutron\]/a user_domain_name = default' /etc/nova/nova.conf
  34. sed -i '/^\[neutron\]/a project_domain_name = default' /etc/nova/nova.conf
  35. sed -i '/^\[neutron\]/a auth_type = password' /etc/nova/nova.conf
  36. sed -i '/^\[neutron\]/a auth_url = http://controller:5000' /etc/nova/nova.conf
  37. systemctl restart openstack-nova-compute.service
  38. systemctl enable neutron-linuxbridge-agent.service;systemctl restart neutron-linuxbridge-agent.service

dashboard

1.Install and configure components

1.Install the packages

  1. yum install openstack-dashboard -y

2.Configure the service (/etc/openstack-dashboard/local_settings)

Configure the controller host

  1. OPENSTACK_HOST = "controller"

Configure the allowed hosts

  1. ALLOWED_HOSTS = ['*', ]

Configure the memcached session storage service

  1. SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
  2.  
  3. CACHES = {
  4. 'default': {
  5. 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
  6. 'LOCATION': 'controller:11211',
  7. }
  8. }

Enable the Identity API version 3

  1. OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Enable support for domains

  1. OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Configure the API versions

  1. OPENSTACK_API_VERSIONS = {
  2. "identity": 3,
  3. "image": 2,
  4. "volume": 3,
  5. }

Configure Default as the default domain for users created via the dashboard

  1. OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

Configure user as the default role for users created via the dashboard

  1. OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

3.Restart the services

  1. systemctl restart httpd.service memcached.service

(16)code

  1. yum install openstack-dashboard -y
  2. sed -i '/^OPENSTACK_HOST/s/OPENSTACK_HOST/#OPENSTACK_HOST/' /etc/openstack-dashboard/local_settings
  3. sed -i '/^#OPENSTACK_HOST/a OPENSTACK_HOST = "controller"' /etc/openstack-dashboard/local_settings
  4. sed -i '/^ALLOWED_HOSTS/s/ALLOWED_HOSTS/#ALLOWED_HOSTS/' /etc/openstack-dashboard/local_settings
  5. sed -i "/^#ALLOWED_HOSTS/a ALLOWED_HOSTS = ['*', ]" /etc/openstack-dashboard/local_settings
  6. cat <<EOF>> /etc/openstack-dashboard/local_settings
  7. SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
  8.  
  9. CACHES = {
  10. 'default': {
  11. 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
  12. 'LOCATION': 'controller:11211',
  13. }
  14. }
  15. OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  16. OPENSTACK_API_VERSIONS = {
  17. "identity": 3,
  18. "image": 2,
  19. "volume": 3,
  20. }
  21. OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
  22. OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
  23. EOF
  24. echo 'WSGIApplicationGroup %{GLOBAL}' >> /etc/httpd/conf.d/openstack-dashboard.conf
  25. systemctl restart httpd.service memcached.service
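After the restart, the dashboard should be reachable at http://controller/dashboard (log in with domain Default and the admin or myuser credentials). A minimal reachability check from the controller, assuming the default WEBROOT of /dashboard:

  1. curl -I http://controller/dashboard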

Troubleshooting dashboard access problems

https://www.cnblogs.com/omgasw/p/11990435.html

launch instance

1.Create the virtual networks: create a network and a subnet; create a router, attach the private (self-service) subnet as an interface, and set the provider network as the external gateway

2.Create a flavor

3.Create a key pair

4.Add security group rules

5.Launch the instance ← flavor, image, network, security group, key pair (a CLI sketch follows below)
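The dashboard steps above can also be done from the CLI. The sketch below is a minimal example matching the configuration used in this guide (a flat provider network named provider, mapped to ens32); the subnet ranges, allocation pool, DNS server, and the presence of ~/.ssh/id_rsa.pub are assumptions to adapt to the actual environment:

  1. . /root/admin-openrc
  2. openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
  3. openstack subnet create --network provider --allocation-pool start=10.1.10.200,end=10.1.10.220 --dns-nameserver 114.114.114.114 --gateway 10.1.10.1 --subnet-range 10.1.10.0/24 provider
  4. openstack network create selfservice
  5. openstack subnet create --network selfservice --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
  6. openstack router create router
  7. openstack router add subnet router selfservice
  8. openstack router set router --external-gateway provider
  9. openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
  10. openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
  11. openstack security group rule create --proto icmp default
  12. openstack security group rule create --proto tcp --dst-port 22 default
  13. openstack server create --flavor m1.nano --image cirros --network selfservice --security-group default --key-name mykey vm1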
