OpenStack Private Cloud Deployment

Controller Node:       em1(10.6.17.11),em2()

Compute Node:          em1(10.6.17.12),em2()

Block Storage Node:    em1(10.6.17.13)

Object Storage Node:   em1(10.6.17.14)

1. Modify the network configuration on the Controller node

em1  =

IPADDR=10.6.17.11

NETMASK=255.255.255.0

GATEWAY=10.6.17.1

em2  =   (keep HWADDR and UUID unchanged)

BOOTPROTO=none

ONBOOT=yes

Modify the network configuration on the Compute node

em1  =

IPADDR=10.6.17.12

NETMASK=255.255.255.0

GATEWAY=10.6.17.1

em2  =   (keep HWADDR and UUID unchanged)

BOOTPROTO=none

ONBOOT=yes
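
For reference, a complete /etc/sysconfig/network-scripts/ifcfg-em1 on the controller might look like the sketch below (standard CentOS 7 ifcfg keys; keep the HWADDR and UUID lines generated for your hardware, and adjust IPADDR per node):

TYPE=Ethernet

DEVICE=em1

ONBOOT=yes

BOOTPROTO=none

IPADDR=10.6.17.11

NETMASK=255.255.255.0

GATEWAY=10.6.17.1

Afterwards restart networking with:  systemctl restart network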

2. Set each node's hostname to make later administration easier

On CentOS 7, use the following command:

hostnamectl --static set-hostname <hostname>
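
For example, run it on each node with the name that the hosts file below will use:

hostnamectl --static set-hostname controller    # on 10.6.17.11

hostnamectl --static set-hostname computer1     # on 10.6.17.12

hostnamectl --static set-hostname block1        # on 10.6.17.13

hostnamectl --static set-hostname object1       # on 10.6.17.14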

-----------------------------------------------------

3. Edit the hosts file on each of the four hosts

vi /etc/hosts

10.6.17.11 controller

10.6.17.12 computer1

10.6.17.13 block1

10.6.17.14 object1

----------------------------------------------------

After the changes, use ping -c 4 to verify that openstack.org, as well as controller, computer1, block1, and object1, are all reachable.
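
For example:

ping -c 4 openstack.org

ping -c 4 controller

ping -c 4 computer1

ping -c 4 block1

ping -c 4 object1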

4. Install the NTP service on the controller

CentOS 7 uses chrony as the time service.

[root@controller ~]#  yum install chrony

## Edit the configuration file

[root@controller ~]# vi /etc/chrony.conf

server cn.pool.ntp.org iburst

allow 10.6.17.0/24

[root@controller ~]# systemctl enable chronyd.service

[root@controller ~]# systemctl start chronyd.service

[root@controller ~]# systemctl stop firewalld.service

[root@controller ~]# systemctl disable firewalld.service

## Install chrony on the other nodes as well

yum install chrony

## Edit the configuration file

vi /etc/chrony.conf

server controller iburst

systemctl enable chronyd.service

systemctl start chronyd.service

systemctl stop firewalld.service

systemctl disable firewalld.service

## Verify time synchronization

[root@controller ~]# chronyc sources

210 Number of sources = 1

MS Name/IP address         Stratum Poll Reach LastRx Last sample

===============================================================================

^* news.neu.edu.cn               2   6    17    44   -795us[ -812us] +/-   31ms

[root@computer1 ~]# chronyc sources

210 Number of sources = 1

MS Name/IP address         Stratum Poll Reach LastRx Last sample

===============================================================================

^* controller                    3   6    17    35  -1007ns[ -300us] +/-   33ms

[root@block1 ~]# chronyc sources

210 Number of sources = 1

MS Name/IP address         Stratum Poll Reach LastRx Last sample

===============================================================================

^* controller                    3   6    37     5     -8ns[ -385us] +/-   30ms

[root@object1 ~]# chronyc sources

210 Number of sources = 1

MS Name/IP address         Stratum Poll Reach LastRx Last sample

===============================================================================

^* controller                    3   6    37     6   -707ns[ -548us] +/-   31ms

5. Add the EPEL yum repository and the openstack-liberty repository

# yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

# yum install centos-release-openstack-liberty

# yum upgrade

# reboot

# yum install python-openstackclient

# yum install openstack-selinux

6. Install the MariaDB database on the controller

Install it with yum, or build it from source.

[root@controller ~]# yum install mariadb mariadb-server MySQL-python

Add the following to my.cnf to bind the service address and enable UTF-8 support:

[mysqld]

...

bind-address = 10.6.17.11

[mysqld]

...

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

# systemctl enable mariadb.service

# systemctl start mariadb.service
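
After MariaDB is running, it is also a good idea to set the database root password and remove the insecure defaults with the standard hardening script:

[root@controller ~]# mysql_secure_installation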

7. Install the RabbitMQ message queue on the controller

[root@controller ~]# yum install rabbitmq-server

[root@controller ~]# systemctl enable rabbitmq-server.service

[root@controller ~]# systemctl start rabbitmq-server.service

## Add the openstack user (replace RABBIT_PASS with your own password)

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...

...done.

## Grant permissions to the account

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...

...done.
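
## To confirm the account and its permissions, the standard rabbitmqctl listing commands can be used:

[root@controller ~]# rabbitmqctl list_users

[root@controller ~]# rabbitmqctl list_permissions -p /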

II. Configure the Identity service

## Create the Identity service database and grant access to the keystone user

[root@controller ~]# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

-> IDENTIFIED BY 'keystone';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

-> IDENTIFIED BY 'keystone';

## Generate an administration token

[root@controller ~]# openssl rand -hex 10

d22ea88344b5d4fa864f

## Install the openstack-keystone components

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi memcached python-memcached

[root@controller ~]# systemctl enable memcached.service

[root@controller ~]# systemctl start memcached.service

## Edit the /etc/keystone/keystone.conf file and set the token

[root@controller ~]# vim /etc/keystone/keystone.conf

#admin_token = ADMIN

Change it to the token generated above:

admin_token = d22ea88344b5d4fa864f

[database]

...

connection = mysql://keystone:keystone@controller/keystone

[memcache]

...

servers = localhost:11211

[token]

...

provider = uuid

driver = memcache

[revoke]

...

driver = sql

[DEFAULT]

...

verbose = True
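
If you prefer not to edit the file by hand, the same keystone.conf settings can be written with openstack-config (a sketch, assuming the openstack-utils package is installed):

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token d22ea88344b5d4fa864f

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:keystone@controller/keystone

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf memcache servers localhost:11211

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf token provider uuid

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf token driver memcache

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf revoke driver sql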

## Populate the Identity service database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

It prints "No handlers could be found for logger "oslo_config.cfg"", but checking keystone.log shows that the sync completed successfully.
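
To double-check, you can list the tables keystone-manage created, using the keystone database credentials granted above:

[root@controller ~]# mysql -u keystone -pkeystone keystone -e "SHOW TABLES;"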

## Configure httpd

[root@controller ~]# vim /etc/httpd/conf/httpd.conf            Change the following:

ServerName controller

## Create a wsgi-keystone.conf file

[root@controller ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf        with the following content:

-------------------------------------------------------------------------------------------------------------------------------------

Listen 5000

Listen 35357

<VirtualHost *:5000>

WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-public

WSGIScriptAlias / /usr/bin/keystone-wsgi-public

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

<IfVersion >= 2.4>

ErrorLogFormat "%{cu}t %M"

</IfVersion>

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

</VirtualHost>

<VirtualHost *:35357>

WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-admin

WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

<IfVersion >= 2.4>

ErrorLogFormat "%{cu}t %M"

</IfVersion>

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

</VirtualHost>

------------------------------------------------------------------------------------------------------------------------------------

## Start httpd

[root@controller ~]# systemctl enable httpd.service

[root@controller ~]# systemctl start httpd.service

## Configure the temporary authentication token:

[root@controller ~]# export OS_TOKEN=d22ea88344b5d4fa864f

[root@controller ~]# export OS_URL=http://controller:35357/v3

[root@controller ~]# export OS_IDENTITY_API_VERSION=3

## Create the Identity service entity

[root@controller ~]# openstack service create \

> --name keystone --description "OpenStack Identity" identity

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Identity               |

| enabled     | True                             |

| id          | 528702ea888749d8b91cdf303cbec285 |

| name        | keystone                         |

| type        | identity                         |

+-------------+----------------------------------+

## Create the Identity API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \

> identity public http://controller:5000/v2.0

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 6d1d13e2a74940338337e2faeb404291 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 528702ea888749d8b91cdf303cbec285 |

| service_name | keystone                         |

| service_type | identity                         |

| url          | http://controller:5000/v2.0      |

+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> identity internal http://controller:5000/v2.0

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 1a6e4df9f7fb444f98da5274c4b45916 |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 528702ea888749d8b91cdf303cbec285 |

| service_name | keystone                         |

| service_type | identity                         |

| url          | http://controller:5000/v2.0      |

+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> identity admin http://controller:35357/v2.0

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 08e839319e5143229abc893e413e0dcc |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 528702ea888749d8b91cdf303cbec285 |

| service_name | keystone                         |

| service_type | identity                         |

| url          | http://controller:35357/v2.0     |

+--------------+----------------------------------+

## Create the admin project

[root@controller ~]# openstack project create --domain default \

> --description "Admin Project" admin

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | Admin Project                    |

| domain_id   | default                          |

| enabled     | True                             |

| id          | 0d1a69b17e444cb3b4884529b6bdb372 |

| is_domain   | False                            |

| name        | admin                            |

| parent_id   | None                             |

+-------------+----------------------------------+

## Create the admin user

[root@controller ~]# openstack user create --domain default \

> --password-prompt admin

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     | Value                            |

+-----------+----------------------------------+

| domain_id | default                          |

| enabled   | True                             |

| id        | 28c24be804a649cb825a8349ca3e6ce3 |

| name      | admin                            |

+-----------+----------------------------------+

## Create the admin role

[root@controller ~]# openstack role create admin

+-------+----------------------------------+

| Field | Value                            |

+-------+----------------------------------+

| id    | c378be38e9574421a54d90e62ee2d0aa |

| name  | admin                            |

+-------+----------------------------------+

## Add the admin role to the admin project and user

[root@controller ~]# openstack role add --project admin --user admin admin

## Create the service project

[root@controller ~]# openstack project create --domain default \

> --description "Service Project" service

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | Service Project                  |

| domain_id   | default                          |

| enabled     | True                             |

| id          | 90b44e1c63104b92a1fadccc2258bbd1 |

| is_domain   | False                            |

| name        | service                          |

| parent_id   | None                             |

+-------------+----------------------------------+

## Verify the accounts

First disable the temporary admin_token authentication mechanism. Edit the /usr/share/keystone/keystone-dist-paste.ini file:

[root@controller ~]# vim /usr/share/keystone/keystone-dist-paste.ini

Remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.

## Unset the temporary environment variables

[root@controller ~]# unset OS_TOKEN OS_URL

## Request a token as the admin user

[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \

> --os-project-domain-id default --os-user-domain-id default \

> --os-project-name admin --os-username admin --os-auth-type password \

> token issue

Password:

+------------+----------------------------------+

| Field      | Value                            |

+------------+----------------------------------+

| expires    | 2015-12-03T04:03:34.754196Z      |

| id         | cba3fce8d9dc4bc7956f3b4aa566051c |

| project_id | 0d1a69b17e444cb3b4884529b6bdb372 |

| user_id    | 28c24be804a649cb825a8349ca3e6ce3 |

+------------+----------------------------------+

## A token was issued, which means authentication works

## Create a client environment script for convenience

[root@controller ~]# vim admin-openrc.sh

Add the following environment variables:

export OS_PROJECT_DOMAIN_ID=default

export OS_USER_DOMAIN_ID=default

export OS_PROJECT_NAME=admin

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

## Source the script and test

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# openstack token issue

+------------+----------------------------------+

| Field      | Value                            |

+------------+----------------------------------+

| expires    | 2015-12-03T04:07:26.391998Z      |

| id         | 24a890cc24b443dc9658a34aba8462d8 |

| project_id | 0d1a69b17e444cb3b4884529b6bdb372 |

| user_id    | 28c24be804a649cb825a8349ca3e6ce3 |

+------------+----------------------------------+

## Verification passed

III. Install and configure the Image service

## Create the database

[root@controller ~]# mysql -u root -p

Enter password:

MariaDB [(none)]> CREATE DATABASE glance;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \

-> IDENTIFIED BY 'glance';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \

-> IDENTIFIED BY 'glance';

## Source the admin credentials

[root@controller ~]# source admin-openrc.sh

## Create the glance user

[root@controller ~]# openstack user create --domain default --password-prompt glance

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     | Value                            |

+-----------+----------------------------------+

| domain_id | default                          |

| enabled   | True                             |

| id        | 19121c440d77403e8b66f197381c68e9 |

| name      | glance                           |

+-----------+----------------------------------+

## Add the admin role to the glance user in the service project

[root@controller ~]# openstack role add --project service --user glance admin

## Create the glance service entity

[root@controller ~]# openstack service create --name glance \

> --description "OpenStack Image service" image

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Image service          |

| enabled     | True                             |

| id          | bb8f3d6386cb423eb682147cf2b5ab92 |

| name        | glance                           |

| type        | image                            |

+-------------+----------------------------------+

## Create the Image service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \

> image public http://controller:9292

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | ac858877305542679883df3afb974a95 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | bb8f3d6386cb423eb682147cf2b5ab92 |

| service_name | glance                           |

| service_type | image                            |

| url          | http://controller:9292           |

+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> image internal http://controller:9292

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 290dc30efb6b45c9bb654252aaf8b878 |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | bb8f3d6386cb423eb682147cf2b5ab92 |

| service_name | glance                           |

| service_type | image                            |

| url          | http://controller:9292           |

+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> image admin http://controller:9292

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 04fac23ce557445d92d77ca53cf4c620 |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | bb8f3d6386cb423eb682147cf2b5ab92 |

| service_name | glance                           |

| service_type | image                            |

| url          | http://controller:9292           |

+--------------+----------------------------------+

## Install and configure the glance packages

[root@controller ~]# yum install openstack-glance python-glance python-glanceclient -y

## Edit the /etc/glance/glance-api.conf configuration file

[root@controller ~]# vim /etc/glance/glance-api.conf

[database]

...

connection = mysql://glance:glance@controller/glance

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = glance

password = glance

[paste_deploy]

...

flavor = keystone

[glance_store]

...

default_store = file

filesystem_store_datadir = /opt/glance/images/

[DEFAULT]

...

notification_driver = noop

[DEFAULT]

...

verbose = True
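
Because filesystem_store_datadir points at a non-default path, make sure the directory exists and is owned by the glance service user before starting the services (an assumed extra step, not created by the packages):

[root@controller ~]# mkdir -p /opt/glance/images/

[root@controller ~]# chown -R glance:glance /opt/glance/images/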

## Edit the /etc/glance/glance-registry.conf configuration file

[root@controller ~]# vim /etc/glance/glance-registry.conf

[database]

...

connection = mysql://glance:glance@controller/glance

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = glance

password = glance

[paste_deploy]

...

flavor = keystone

[DEFAULT]

...

notification_driver = noop

[DEFAULT]

...

verbose = True

## Populate the glance database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

## Start the services

[root@controller ~]# systemctl enable openstack-glance-api.service \

> openstack-glance-registry.service

[root@controller ~]# systemctl start openstack-glance-api.service \

> openstack-glance-registry.service

## Verify the glance service

Add the Image API version environment variable to the client script:

[root@controller ~]# echo "export OS_IMAGE_API_VERSION=2" \

> | tee -a admin-openrc.sh

[root@controller ~]# source admin-openrc.sh

## Download a test image

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
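
## Optionally verify the download; the md5 checksum should match the value glance reports after the upload below (ee1eca47dc88f4879d8a229cc70a07c6)

md5sum cirros-0.3.4-x86_64-disk.img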

## Upload the image to the Image service

[root@controller ~]# glance image-create --name "cirros" \

> --file cirros-0.3.4-x86_64-disk.img \

> --disk-format qcow2 --container-format bare \

> --visibility public --progress

[=============================>] 100%

+------------------+--------------------------------------+

| Property         | Value                                |

+------------------+--------------------------------------+

| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |

| container_format | bare                                 |

| created_at       | 2015-12-03T05:05:33Z                 |

| disk_format      | qcow2                                |

| id               | 3846ec1a-ad85-4b4d-88d9-17f2374ed41d |

| min_disk         | 0                                    |

| min_ram          | 0                                    |

| name             | cirros                               |

| owner            | 0d1a69b17e444cb3b4884529b6bdb372     |

| protected        | False                                |

| size             | 13287936                             |

| status           | active                               |

| tags             | []                                   |

| updated_at       | 2015-12-03T05:05:33Z                 |

| virtual_size     | None                                 |

| visibility       | public                               |

+------------------+--------------------------------------+

[root@controller ~]# glance image-list

+--------------------------------------+--------+

| ID                                   | Name   |

+--------------------------------------+--------+

| 3846ec1a-ad85-4b4d-88d9-17f2374ed41d | cirros |

+--------------------------------------+--------+

IV. Install and configure the Compute service

## Create the database and user

[root@controller ~]# mysql -u root -p

Enter password:

MariaDB [(none)]> CREATE DATABASE nova;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \

-> IDENTIFIED BY 'nova';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \

-> IDENTIFIED BY 'nova';

## Create the nova user

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# openstack user create --domain default --password-prompt nova

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     | Value                            |

+-----------+----------------------------------+

| domain_id | default                          |

| enabled   | True                             |

| id        | 05dfb9b8091b4d708455281540a030d1 |

| name      | nova                             |

+-----------+----------------------------------+

## Add the admin role to the nova user

[root@controller ~]# openstack role add --project service --user nova admin

## Create the nova service entity

[root@controller ~]# openstack service create --name nova \

> --description "OpenStack Compute" compute

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Compute                |

| enabled     | True                             |

| id          | 1f73377bd0bc4f208ca6c904a71e6279 |

| name        | nova                             |

| type        | compute                          |

+-------------+----------------------------------+

## Create the Compute service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \

> compute public http://controller:8774/v2/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | b545b7bd81ed4b64b559b905dd3b4532        |

| interface    | public                                  |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 1f73377bd0bc4f208ca6c904a71e6279        |

| service_name | nova                                    |

| service_type | compute                                 |

| url          | http://controller:8774/v2/%(tenant_id)s |

+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> compute internal http://controller:8774/v2/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | 5b23e9f26e094f8f99a4416a085b6fc8        |

| interface    | internal                                |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 1f73377bd0bc4f208ca6c904a71e6279        |

| service_name | nova                                    |

| service_type | compute                                 |

| url          | http://controller:8774/v2/%(tenant_id)s |

+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> compute admin http://controller:8774/v2/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | 77638cfde23049cd9bf1c06347385562        |

| interface    | admin                                   |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 1f73377bd0bc4f208ca6c904a71e6279        |

| service_name | nova                                    |

| service_type | compute                                 |

| url          | http://controller:8774/v2/%(tenant_id)s |

+--------------+-----------------------------------------+

## Install the packages on the controller

[root@controller ~]# yum install openstack-nova-api openstack-nova-cert \

> openstack-nova-conductor openstack-nova-console \

> openstack-nova-novncproxy openstack-nova-scheduler \

> python-novaclient

## Edit the /etc/nova/nova.conf configuration file

[database]

...

connection = mysql://nova:nova@controller/nova

[DEFAULT]

...

rpc_backend = rabbit

my_ip = 10.6.17.11

auth_strategy = keystone

network_api_class = nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

enabled_apis=osapi_compute,metadata

verbose = True

[oslo_messaging_rabbit]

...

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = openstack

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = nova

[glance]

...

host = controller

[oslo_concurrency]

...

lock_path = /var/lib/nova/tmp

## Populate the database

su -s /bin/sh -c "nova-manage db sync" nova

## Start the services

[root@controller ~]# systemctl enable openstack-nova-api.service \

> openstack-nova-cert.service openstack-nova-consoleauth.service \

> openstack-nova-scheduler.service openstack-nova-conductor.service \

> openstack-nova-novncproxy.service

[root@controller ~]# systemctl start openstack-nova-api.service \

> openstack-nova-cert.service openstack-nova-consoleauth.service \

> openstack-nova-scheduler.service openstack-nova-conductor.service \

> openstack-nova-novncproxy.service

## Install and configure the Compute node (computer1)

yum install openstack-nova-compute sysfsutils

## Edit the /etc/nova/nova.conf configuration file

[DEFAULT]

...

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.6.17.12

network_api_class = nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

verbose = True

[oslo_messaging_rabbit]

...

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = openstack

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = nova

[glance]

...

host = controller

[oslo_concurrency]

...

lock_path = /var/lib/nova/tmp

## Check whether the CPU supports hardware virtualization: a value greater than 0 means it does, and no extra configuration is needed

[root@computer1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

8

## If the hardware does not support virtualization, configure nova to use QEMU instead

[root@computer1 ~]# vim /etc/nova/nova.conf

[libvirt]

...

virt_type = qemu

## Start the services

[root@computer1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service

[root@computer1 ~]# systemctl start libvirtd.service openstack-nova-compute.service

## Verify the services

[root@controller ~]# nova service-list

+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

| 1  | nova-consoleauth | controller | internal | enabled | up    | 2015-12-03T07:06:27.000000 | -               |

| 2  | nova-conductor   | controller | internal | enabled | up    | 2015-12-03T07:06:27.000000 | -               |

| 5  | nova-cert        | controller | internal | enabled | up    | 2015-12-03T07:06:27.000000 | -               |

| 6  | nova-scheduler   | controller | internal | enabled | up    | 2015-12-03T07:06:27.000000 | -               |

| 7  | nova-compute     | computer1  | nova     | enabled | up    | 2015-12-03T07:06:31.000000 | -               |

+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

[root@controller ~]# nova endpoints

+-----------+------------------------------------------------------------+

| nova      | Value                                                      |

+-----------+------------------------------------------------------------+

| id        | 5b23e9f26e094f8f99a4416a085b6fc8                           |

| interface | internal                                                   |

| region    | RegionOne                                                  |

| region_id | RegionOne                                                  |

| url       | http://controller:8774/v2/0d1a69b17e444cb3b4884529b6bdb372 |

+-----------+------------------------------------------------------------+

+-----------+------------------------------------------------------------+

| nova      | Value                                                      |

+-----------+------------------------------------------------------------+

| id        | 77638cfde23049cd9bf1c06347385562                           |

| interface | admin                                                      |

| region    | RegionOne                                                  |

| region_id | RegionOne                                                  |

| url       | http://controller:8774/v2/0d1a69b17e444cb3b4884529b6bdb372 |

+-----------+------------------------------------------------------------+

+-----------+------------------------------------------------------------+

| nova      | Value                                                      |

+-----------+------------------------------------------------------------+

| id        | b545b7bd81ed4b64b559b905dd3b4532                           |

| interface | public                                                     |

| region    | RegionOne                                                  |

| region_id | RegionOne                                                  |

| url       | http://controller:8774/v2/0d1a69b17e444cb3b4884529b6bdb372 |

+-----------+------------------------------------------------------------+

+-----------+----------------------------------+

| keystone  | Value                            |

+-----------+----------------------------------+

| id        | 08e839319e5143229abc893e413e0dcc |

| interface | admin                            |

| region    | RegionOne                        |

| region_id | RegionOne                        |

| url       | http://controller:35357/v2.0     |

+-----------+----------------------------------+

+-----------+----------------------------------+

| keystone  | Value                            |

+-----------+----------------------------------+

| id        | 1a6e4df9f7fb444f98da5274c4b45916 |

| interface | internal                         |

| region    | RegionOne                        |

| region_id | RegionOne                        |

| url       | http://controller:5000/v2.0      |

+-----------+----------------------------------+

+-----------+----------------------------------+

| keystone  | Value                            |

+-----------+----------------------------------+

| id        | 6d1d13e2a74940338337e2faeb404291 |

| interface | public                           |

| region    | RegionOne                        |

| region_id | RegionOne                        |

| url       | http://controller:5000/v2.0      |

+-----------+----------------------------------+

+-----------+----------------------------------+

| glance    | Value                            |

+-----------+----------------------------------+

| id        | 04fac23ce557445d92d77ca53cf4c620 |

| interface | admin                            |

| region    | RegionOne                        |

| region_id | RegionOne                        |

| url       | http://controller:9292           |

+-----------+----------------------------------+

+-----------+----------------------------------+

| glance    | Value                            |

+-----------+----------------------------------+

| id        | 290dc30efb6b45c9bb654252aaf8b878 |

| interface | internal                         |

| region    | RegionOne                        |

| region_id | RegionOne                        |

| url       | http://controller:9292           |

+-----------+----------------------------------+

+-----------+----------------------------------+

| glance    | Value                            |

+-----------+----------------------------------+

| id        | ac858877305542679883df3afb974a95 |

| interface | public                           |

| region    | RegionOne                        |

| region_id | RegionOne                        |

| url       | http://controller:9292           |

+-----------+----------------------------------+

[root@controller ~]# nova image-list

+--------------------------------------+--------+--------+--------+

| ID                                   | Name   | Status | Server |

+--------------------------------------+--------+--------+--------+

| 3846ec1a-ad85-4b4d-88d9-17f2374ed41d | cirros | ACTIVE |        |

+--------------------------------------+--------+--------+--------+

V. Install and configure the Networking service

## Create the database and user

[root@controller ~]# mysql -u root -p

Enter password:

MariaDB [(none)]> CREATE DATABASE neutron;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \

-> IDENTIFIED BY 'neutron';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \

-> IDENTIFIED BY 'neutron';

## Create the neutron user

[root@controller ~]# openstack user create --domain default --password-prompt neutron

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     | Value                            |

+-----------+----------------------------------+

| domain_id | default                          |

| enabled   | True                             |

| id        | 04b2950e0b96451494a4391c5c7bcd2e |

| name      | neutron                          |

+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user neutron admin

## Create the neutron service entity

[root@controller ~]# openstack service create --name neutron \

> --description "OpenStack Networking" network

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Networking             |

| enabled     | True                             |

| id          | b9ca3cbea4df4966bbfff153e676d404 |

| name        | neutron                          |

| type        | network                          |

+-------------+----------------------------------+

## Create the Networking service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \

> network public http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | e741a15b8f4f424d8ac605083d079911 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | b9ca3cbea4df4966bbfff153e676d404 |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> network internal http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | ba9df6bd296f4f42bd24588b292ed54b |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | b9ca3cbea4df4966bbfff153e676d404 |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> network admin http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | ba86430058a243da8c50ab9d42178eab |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | b9ca3cbea4df4966bbfff153e676d404 |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

## Install and configure the networking packages (I chose Networking Option 1: Provider networks)

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 \

> openstack-neutron-linuxbridge python-neutronclient ebtables ipset

## Edit the /etc/neutron/neutron.conf configuration file

[database]

...

connection = mysql://neutron:neutron@controller/neutron

[DEFAULT]

...

core_plugin = ml2

service_plugins =

rpc_backend = rabbit

auth_strategy = keystone

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

verbose = True

[oslo_messaging_rabbit]

...

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = openstack

## In [keystone_authtoken], comment out all other options and use only the following

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = neutron

[nova]

...

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = nova

password = nova

[oslo_concurrency]

...

lock_path = /var/lib/neutron/tmp

## Configure the Modular Layer 2 (ML2) plug-in

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini configuration file:

[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

...

type_drivers = flat,vlan

tenant_network_types =

mechanism_drivers = linuxbridge

extension_drivers = port_security

[ml2_type_flat]

...

flat_networks = public

[securitygroup]

...

enable_ipset = True

## Configure the Linux bridge agent

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini configuration file:

[linux_bridge]

physical_interface_mappings = public:em2

[vxlan]

enable_vxlan = False

[agent]

...

prevent_arp_spoofing = True

[securitygroup]

...

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

## Configure the DHCP agent

Edit the /etc/neutron/dhcp_agent.ini configuration file:

[DEFAULT]

...

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = True

verbose = True

## Configure the metadata agent

Edit the /etc/neutron/metadata_agent.ini configuration file: comment out the existing options under [DEFAULT] and add the following. Replace the metadata_proxy_shared_secret value (METADATA_SECRET, "metadata" below) with your own secret.

[DEFAULT]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_region = RegionOne

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = neutron

nova_metadata_ip = controller

metadata_proxy_shared_secret = metadata

verbose = True

## Configure Compute to use the Networking service

Edit the /etc/nova/nova.conf configuration file. Use the same metadata_proxy_shared_secret value ("metadata") as in metadata_agent.ini.

[neutron]

...

url = http://controller:9696

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = True

metadata_proxy_shared_secret = metadata

## Create a symbolic link to the ML2 plug-in configuration

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

## Populate the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

INFO  [alembic.runtime.migration] Context impl MySQLImpl.

........

OK

## Restart the Compute API service

[root@controller ~]# systemctl restart openstack-nova-api.service

## Start the networking services

[root@controller ~]# systemctl enable neutron-server.service \

> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

> neutron-metadata-agent.service

[root@controller ~]# systemctl start neutron-server.service \

> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

> neutron-metadata-agent.service

## Configure the Networking service on the Compute node

## Install the networking packages

[root@computer1 ~]# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset

## Edit the /etc/neutron/neutron.conf configuration file

Comment out all options in the [database] section; the compute node does not access the database directly.

[DEFAULT]

...

rpc_backend = rabbit

auth_strategy = keystone

verbose = True

[oslo_messaging_rabbit]

...

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = openstack

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = neutron

Comment out the other options in the [keystone_authtoken] section.

[oslo_concurrency]

...

lock_path = /var/lib/neutron/tmp

## Configure the Linux bridge agent

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini configuration file:

[linux_bridge]

physical_interface_mappings = public:em2

[vxlan]

enable_vxlan = False

[agent]

...

prevent_arp_spoofing = True

[securitygroup]

...

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

## Configure Compute to use the Networking service

## Edit the /etc/nova/nova.conf configuration file

[neutron]

...

url = http://controller:9696

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

## Create the symbolic link and start the services

[root@computer1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

[root@computer1 ~]# systemctl restart openstack-nova-compute.service

[root@computer1 ~]# systemctl enable neutron-linuxbridge-agent.service

[root@computer1 ~]# systemctl start neutron-linuxbridge-agent.service

## Finally, verify the services

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# neutron ext-list

+-----------------------+--------------------------+

| alias                 | name                     |

+-----------------------+--------------------------+

| flavors               | Neutron Service Flavors  |

| security-group        | security-group           |

| dns-integration       | DNS Integration          |

| net-mtu               | Network MTU              |

| port-security         | Port Security            |

| binding               | Port Binding             |

| provider              | Provider Network         |

| agent                 | agent                    |

| quotas                | Quota management support |

| subnet_allocation     | Subnet Allocation        |

| dhcp_agent_scheduler  | DHCP Agent Scheduler     |

| rbac-policies         | RBAC Policies            |

| external-net          | Neutron external network |

| multi-provider        | Multi Provider Network   |

| allowed-address-pairs | Allowed Address Pairs    |

| extra_dhcp_opt        | Neutron Extra DHCP opts  |

+-----------------------+--------------------------+

[root@controller ~]# neutron agent-list

+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

| 3083cb45-f248-4de5-828e-38d9bc961814 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |

| 3b2564c7-e797-4e20-a257-166ade425655 | Linux bridge agent | computer1  | :-)   | True           | neutron-linuxbridge-agent |

| b61a8667-416d-4ed5-b14d-87d568bab59f | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |

| f87d6cc0-4277-4259-96ed-638294503236 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |

+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

VI. Install and configure the Dashboard

[root@controller ~]# yum install openstack-dashboard -y

## Edit the /etc/openstack-dashboard/local_settings configuration file

[root@controller ~]# vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', ]

CACHES = {

'default': {

'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

'LOCATION': '127.0.0.1:11211',

}

}

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

TIME_ZONE = "Asia/Shanghai"

## Start the services

[root@controller ~]# systemctl enable httpd.service memcached.service

[root@controller ~]# systemctl restart httpd.service memcached.service

## Verify the service

Open http://controller/dashboard in a browser.

The username and password are both admin.
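
If no browser is available on this network, a quick command-line check that the dashboard answers also works:

[root@controller ~]# curl -I http://controller/dashboard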

VII. Install and configure the Block Storage service

On the controller node:

## Create the database

[root@controller ~]# mysql -u root -p

Enter password:

MariaDB [(none)]> CREATE DATABASE cinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \

-> IDENTIFIED BY 'cinder';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \

-> IDENTIFIED BY 'cinder';

## Create the cinder user

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# openstack user create --domain default --password-prompt cinder

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     | Value                            |

+-----------+----------------------------------+

| domain_id | default                          |

| enabled   | True                             |

| id        | 212ced82b0a844f099a80a712786740c |

| name      | cinder                           |

+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user cinder admin

## Create the cinder and cinderv2 service entities

[root@controller ~]# openstack service create --name cinder \

> --description "OpenStack Block Storage" volume

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Block Storage          |

| enabled     | True                             |

| id          | 6af1b2a1843f42058b03368960ae6b09 |

| name        | cinder                           |

| type        | volume                           |

+-------------+----------------------------------+

[root@controller ~]# openstack service create --name cinderv2 \

> --description "OpenStack Block Storage" volumev2

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Block Storage          |

| enabled     | True                             |

| id          | 3e42f0d7868745579ce658d88eef4c67 |

| name        | cinderv2                         |

| type        | volumev2                         |

+-------------+----------------------------------+

## Create the Block Storage service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \

> volume public http://controller:8776/v1/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | 14ee6bd40f08489bb314e5cc5fa39a80        |

| interface    | public                                  |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 6af1b2a1843f42058b03368960ae6b09        |

| service_name | cinder                                  |

| service_type | volume                                  |

| url          | http://controller:8776/v1/%(tenant_id)s |

+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> volume internal http://controller:8776/v1/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | 5895f13f971a4ee390ef104ddbe7422b        |

| interface    | internal                                |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 6af1b2a1843f42058b03368960ae6b09        |

| service_name | cinder                                  |

| service_type | volume                                  |

| url          | http://controller:8776/v1/%(tenant_id)s |

+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> volume admin http://controller:8776/v1/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | e4993535c6504f95862cab4f7ab7cccb        |

| interface    | admin                                   |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 6af1b2a1843f42058b03368960ae6b09        |

| service_name | cinder                                  |

| service_type | volume                                  |

| url          | http://controller:8776/v1/%(tenant_id)s |

+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> volumev2 public http://controller:8776/v2/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | 8696e7142c17488e973c653713c14379        |

| interface    | public                                  |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 3e42f0d7868745579ce658d88eef4c67        |

| service_name | cinderv2                                |

| service_type | volumev2                                |

| url          | http://controller:8776/v2/%(tenant_id)s |

+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> volumev2 internal http://controller:8776/v2/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | da6b9e902a4d413dadd951ec751b1082        |

| interface    | internal                                |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 3e42f0d7868745579ce658d88eef4c67        |

| service_name | cinderv2                                |

| service_type | volumev2                                |

| url          | http://controller:8776/v2/%(tenant_id)s |

+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        | Value                                   |

+--------------+-----------------------------------------+

| enabled      | True                                    |

| id           | 0cfa27d0a5b946e8b35901d242ac165f        |

| interface    | admin                                   |

| region       | RegionOne                               |

| region_id    | RegionOne                               |

| service_id   | 3e42f0d7868745579ce658d88eef4c67        |

| service_name | cinderv2                                |

| service_type | volumev2                                |

| url          | http://controller:8776/v2/%(tenant_id)s |

+--------------+-----------------------------------------+

## Install and configure the packages

[root@controller ~]# yum install openstack-cinder python-cinderclient -y

## Edit the /etc/cinder/cinder.conf configuration file

[root@controller ~]# vim /etc/cinder/cinder.conf

[database]

...

connection = mysql://cinder:cinder@controller/cinder

[DEFAULT]

...

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.6.17.11

verbose = True

[oslo_messaging_rabbit]

...

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = openstack

Comment out the other options in the [keystone_authtoken] section and add the following:

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = cinder

[oslo_concurrency]

...

lock_path = /var/lib/cinder/tmp

## Populate the database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

## Edit the /etc/nova/nova.conf configuration file

[cinder]

os_region_name = RegionOne

## Start the services

[root@controller ~]# systemctl restart openstack-nova-api.service

[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

## Block Storage node setup

## Install packages

[root@block1 ~]# yum install lvm2

## Enable and start the LVM metadata service

[root@block1 ~]# systemctl enable lvm2-lvmetad.service

[root@block1 ~]# systemctl start lvm2-lvmetad.service

## Create a partition for cinder (first check the current layout)

[root@block1 ~]# fdisk -l

磁盘 /dev/sda:599.6 GB, 599550590976 字节,1170997248 个扇区

Units = 扇区 of 1 * 512 = 512 bytes

扇区大小(逻辑/物理):512 字节 / 512 字节

I/O 大小(最小/最佳):512 字节 / 512 字节

磁盘标签类型:dos

磁盘标识符:0x000e0185

设备 Boot      Start         End      Blocks   Id  System

/dev/sda1   *        2048     2099199     1048576   83  Linux

/dev/sda2         2099200   211814399   104857600   83  Linux

/dev/sda3       211814400   211830783        8192   82  Linux swap / Solaris

## View the partition table with parted

[root@block1 ~]# parted /dev/sda

GNU Parted 3.1

使用 /dev/sda

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) p

Model: Dell Virtual Disk (scsi)

Disk /dev/sda: 600GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Disk Flags:

Number  Start   End     Size    Type     File system     标志

1      1049kB  1075MB  1074MB  primary  xfs             启动

2      1075MB  108GB   107GB   primary  xfs

3      108GB   108GB   8389kB  primary  linux-swap(v1)

## Create the new partition with parted

(parted) mkpart primary 108GB 208GB

(parted) p

Model: Dell Virtual Disk (scsi)

Disk /dev/sda: 600GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Disk Flags:

Number  Start   End     Size    Type     File system     标志

1      1049kB  1075MB  1074MB  primary  xfs             启动

2      1075MB  108GB   107GB   primary  xfs

3      108GB   108GB   8389kB  primary  linux-swap(v1)

4      108GB   208GB   99.5GB  primary

[root@block1 ~]# fdisk -l

磁盘 /dev/sda:599.6 GB, 599550590976 字节,1170997248 个扇区

Units = 扇区 of 1 * 512 = 512 bytes

扇区大小(逻辑/物理):512 字节 / 512 字节

I/O 大小(最小/最佳):512 字节 / 512 字节

磁盘标签类型:dos

磁盘标识符:0x000e0185

设备 Boot      Start         End      Blocks   Id  System

/dev/sda1   *        2048     2099199     1048576   83  Linux

/dev/sda2         2099200   211814399   104857600   83  Linux

/dev/sda3       211814400   211830783        8192   82  Linux swap / Solaris

/dev/sda4       211830784   406249471    97209344   83  Linux

[root@block1 ~]# pvcreate /dev/sda4

Physical volume "/dev/sda4" successfully created

[root@block1 ~]# vgcreate cinder-volumes /dev/sda4

Volume group "cinder-volumes" successfully created

## Edit the /etc/lvm/lvm.conf configuration file and add a filter that accepts only the cinder partition (a = accept, r = reject)

devices {

...

filter = [ "a/sda4/", "r/.*/"]
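## After changing the filter, a quick check that LVM still sees the physical volume and volume group created above (these commands only report, they change nothing):

[root@block1 ~]# pvs

[root@block1 ~]# vgs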

## Install the OpenStack packages

[root@block1 ~]# yum install openstack-cinder targetcli python-oslo-policy -y

## Edit the /etc/cinder/cinder.conf configuration file

[database]

...

connection = mysql://cinder:cinder@controller/cinder

[DEFAULT]

...

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.6.17.13

enabled_backends = lvm

glance_host = controller

verbose = True

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group = cinder-volumes

iscsi_protocol = iscsi

iscsi_helper = lioadm

[oslo_messaging_rabbit]

...

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = openstack

# Comment out all other options under [keystone_authtoken] and use only the following

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = cinder

[oslo_concurrency]

...

lock_path = /var/lib/cinder/tmp

## Enable and start the services

[root@block1 ~]# systemctl enable openstack-cinder-volume.service target.service

[root@block1 ~]# systemctl start openstack-cinder-volume.service target.service

## Verify the services (on the controller)

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# cinder service-list

+------------------+------------+------+---------+-------+----------------------------+-----------------+

|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |

+------------------+------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler | controller | nova | enabled |   up  | 2015-12-04T07:35:57.000000 |        -        |

|  cinder-volume   | block1@lvm | nova | enabled |   up  | 2015-12-04T07:36:03.000000 |        -        |

+------------------+------------+------+---------+-------+----------------------------+-----------------+
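## Optional smoke test from the controller: create and list a 1 GB test volume (demo-volume1 is an arbitrary example name):

[root@controller ~]# openstack volume create --size 1 demo-volume1

[root@controller ~]# openstack volume list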

八、 Install and configure the Object Storage service

## Create the swift user

[root@controller ~]# openstack user create --domain default --password-prompt swift

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     | Value                            |

+-----------+----------------------------------+

| domain_id | default                          |

| enabled   | True                             |

| id        | 7a2915af03464268ab80ef732aa5ba93 |

| name      | swift                            |

+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user swift admin

## Create the swift service entity

[root@controller ~]# openstack service create --name swift \

> --description "OpenStack Object Storage" object-store

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Object Storage         |

| enabled     | True                             |

| id          | f7b97bc04bd24d67b12694bb3004843b |

| name        | swift                            |

| type        | object-store                     |

+-------------+----------------------------------+

## Create the Object Storage service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \

> object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s

+--------------+----------------------------------------------+

| Field        | Value                                        |

+--------------+----------------------------------------------+

| enabled      | True                                         |

| id           | f8d7a017753947d981e0f42aa5fe4bde             |

| interface    | public                                       |

| region       | RegionOne                                    |

| region_id    | RegionOne                                    |

| service_id   | f7b97bc04bd24d67b12694bb3004843b             |

| service_name | swift                                        |

| service_type | object-store                                 |

| url          | http://controller:8080/v1/AUTH_%(tenant_id)s |

+--------------+----------------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s

+--------------+----------------------------------------------+

| Field        | Value                                        |

+--------------+----------------------------------------------+

| enabled      | True                                         |

| id           | 6d125e48860b4f948300b412f744b23f             |

| interface    | internal                                     |

| region       | RegionOne                                    |

| region_id    | RegionOne                                    |

| service_id   | f7b97bc04bd24d67b12694bb3004843b             |

| service_name | swift                                        |

| service_type | object-store                                 |

| url          | http://controller:8080/v1/AUTH_%(tenant_id)s |

+--------------+----------------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \

> object-store admin http://controller:8080/v1

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | cdbea58743c946cca41c98682ea98793 |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | f7b97bc04bd24d67b12694bb3004843b |

| service_name | swift                            |

| service_type | object-store                     |

| url          | http://controller:8080/v1        |

+--------------+----------------------------------+

## Install the proxy packages (on the controller)

[root@controller ~]# yum install openstack-swift-proxy python-swiftclient \

> python-keystoneclient python-keystonemiddleware \

> memcached

## Download the proxy-server configuration sample

[root@controller ~]# curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/liberty

## Edit the /etc/swift/proxy-server.conf configuration file

[DEFAULT]

...

bind_port = 8080

user = swift

swift_dir = /etc/swift

[pipeline:main]

pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]

use = egg:swift#proxy

...

account_autocreate = true

[filter:keystoneauth]

use = egg:swift#keystoneauth

...

operator_roles = admin,user

# Comment out all other options in [filter:authtoken] and use only the following

[filter:authtoken]

paste.filter_factory = keystonemiddleware.auth_token:filter_factory

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = swift

password = swift

delay_auth_decision = true

[filter:cache]

use = egg:swift#memcache

...

memcache_servers = 127.0.0.1:11211

## Object Storage node setup

## Install the supporting packages

[root@object1 ~]# yum install xfsprogs rsync -y

[root@object1 ~]# fdisk -l

磁盘 /dev/sda:599.6 GB, 599550590976 字节,1170997248 个扇区

Units = 扇区 of 1 * 512 = 512 bytes

扇区大小(逻辑/物理):512 字节 / 512 字节

I/O 大小(最小/最佳):512 字节 / 512 字节

磁盘标签类型:dos

磁盘标识符:0x000bbc86

设备 Boot      Start         End      Blocks   Id  System

/dev/sda1   *        2048   204802047   102400000   83  Linux

/dev/sda2       204802048   221186047     8192000   82  Linux swap / Solaris

/dev/sda3       221186048   611327999   195070976   83  Linux

/dev/sda4       611328000  1170997247   279834624   83  Linux

[root@object1 ~]# mkfs.xfs -f /dev/sda3

[root@object1 ~]# mkfs.xfs -f /dev/sda4

[root@object1 ~]# mkdir -p /opt/node/sda3

[root@object1 ~]# mkdir -p /opt/node/sda4

## Edit /etc/fstab so the filesystems are mounted automatically at boot

/dev/sda3 /opt/node/sda3 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

/dev/sda4 /opt/node/sda4 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

## Mount the disks

[root@object1 ~]# mount -a

[root@object1 ~]# df -h

文件系统        容量  已用  可用 已用% 挂载点

/dev/sda1        98G  1.2G   97G    2% /

devtmpfs        7.8G     0  7.8G    0% /dev

tmpfs           7.8G     0  7.8G    0% /dev/shm

tmpfs           7.8G  8.4M  7.8G    1% /run

tmpfs           7.8G     0  7.8G    0% /sys/fs/cgroup

/dev/sda3       186G   33M  186G    1% /opt/node/sda3

/dev/sda4       267G   33M  267G    1% /opt/node/sda4

## Edit the /etc/rsyncd.conf configuration file

[root@object1 ~]# vim /etc/rsyncd.conf

--------------------------------------------------------

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

address = 10.6.17.14

[account]

max connections = 2

path = /opt/node/

read only = false

lock file = /var/lock/account.lock

[container]

max connections = 2

path = /opt/node/

read only = false

lock file = /var/lock/container.lock

[object]

max connections = 2

path = /opt/node/

read only = false

lock file = /var/lock/object.lock

--------------------------------------------------------

## Enable and start the rsync service

[root@object1 ~]# systemctl enable rsyncd.service

[root@object1 ~]# systemctl start rsyncd.service
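## To confirm rsyncd is exporting the three modules, list them from any node (the address is the one configured above):

[root@controller ~]# rsync rsync://10.6.17.14/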

## Install the Swift storage packages

[root@object1 ~]# yum install openstack-swift-account openstack-swift-container \

> openstack-swift-object

## Download the configuration samples

[root@object1 ~]# curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/liberty

[root@object1 ~]# curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/liberty

[root@object1 ~]# curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/liberty

## Edit the /etc/swift/account-server.conf configuration file

[DEFAULT]

...

bind_ip = 10.6.17.14

bind_port = 6002

user = swift

swift_dir = /etc/swift

devices = /opt/node

mount_check = true

[pipeline:main]

pipeline = healthcheck recon account-server

[filter:recon]

use = egg:swift#recon

...

recon_cache_path = /var/cache/swift

## Edit the /etc/swift/container-server.conf configuration file

[DEFAULT]

...

bind_ip = 10.6.17.14

bind_port = 6001

user = swift

swift_dir = /etc/swift

devices = /opt/node

mount_check = true

[pipeline:main]

pipeline = healthcheck recon container-server

[filter:recon]

use = egg:swift#recon

...

recon_cache_path = /var/cache/swift

## Edit the /etc/swift/object-server.conf configuration file

[DEFAULT]

...

bind_ip = 10.6.17.14

bind_port = 6000

user = swift

swift_dir = /etc/swift

devices = /opt/node

mount_check = true

[pipeline:main]

pipeline = healthcheck recon object-server

[filter:recon]

use = egg:swift#recon

...

recon_cache_path = /var/cache/swift

recon_lock_path = /var/lock
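## Before building the rings, it is worth double-checking the bind addresses and ports of the three servers:

[root@object1 ~]# grep -E '^(bind_ip|bind_port)' /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf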

## Set ownership of the device mount directory

[root@object1 ~]# chown -R swift:swift /opt/node

## Create the recon cache directory /var/cache/swift

[root@object1 ~]# mkdir -p /var/cache/swift

[root@object1 ~]# chown -R swift:swift /var/cache/swift

## Create the account ring on the controller

## Create the account.builder file (partition power 10, 2 replicas, minimum 1 hour between partition moves)

[root@controller ~]# cd /etc/swift/

[root@controller swift]# swift-ring-builder account.builder create 10 2 1

## Add the storage node devices to the account ring

[root@controller swift]# swift-ring-builder account.builder add \

> --region 1 --zone 1 --ip 10.6.17.14 --port 6002 --device sda3 --weight 100

Output:

Device d0r1z1-10.6.17.14:6002R10.6.17.14:6002/sda3_"" with 100.0 weight got id 0

[root@controller swift]# swift-ring-builder account.builder add \

> --region 1 --zone 2 --ip 10.6.17.14 --port 6002 --device sda4 --weight 100

Output:

Device d1r1z2-10.6.17.14:6002R10.6.17.14:6002/sda4_"" with 100.0 weight got id 1

## Verify the account ring contents

[root@controller swift]# swift-ring-builder account.builder

account.builder, build version 2

1024 partitions, 2.000000 replicas, 1 regions, 2 zones, 2 devices, 100.00 balance, 0.00 dispersion

The minimum number of hours before a partition can be reassigned is 1

The overload factor is 0.00% (0.000000)

Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta

0       1     1      10.6.17.14  6002      10.6.17.14              6002      sda3 100.00          0 -100.00

1       1     2      10.6.17.14  6002      10.6.17.14              6002      sda4 100.00          0 -100.00

# Rebalance the account ring

[root@controller swift]# swift-ring-builder account.builder rebalance

.......

Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00

## Create the container ring

[root@controller ~]# cd /etc/swift/

[root@controller swift]# swift-ring-builder container.builder create 10 2 1

## Add the storage node devices to the container ring

[root@controller swift]# swift-ring-builder container.builder add \

> --region 1 --zone 1 --ip 10.6.17.14 --port 6001 --device sda3 --weight 100

Output:

Device d0r1z1-10.6.17.14:6001R10.6.17.14:6001/sda3_"" with 100.0 weight got id 0

[root@controller swift]# swift-ring-builder container.builder add \

> --region 1 --zone 2 --ip 10.6.17.14 --port 6001 --device sda4 --weight 100

Output:

Device d1r1z2-10.6.17.14:6001R10.6.17.14:6001/sda4_"" with 100.0 weight got id 1

## Verify the container ring contents

[root@controller swift]# swift-ring-builder container.builder

container.builder, build version 2

1024 partitions, 2.000000 replicas, 1 regions, 2 zones, 2 devices, 100.00 balance, 0.00 dispersion

The minimum number of hours before a partition can be reassigned is 1

The overload factor is 0.00% (0.000000)

Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta

0       1     1      10.6.17.14  6001      10.6.17.14              6001      sda3 100.00          0 -100.00

1       1     2      10.6.17.14  6001      10.6.17.14              6001      sda4 100.00          0 -100.00

# Rebalance the container ring

[root@controller swift]# swift-ring-builder container.builder rebalance

....

Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00

## Create the object ring (note: the object server listens on port 6000, per object-server.conf above)

[root@controller ~]# cd /etc/swift/

[root@controller swift]# swift-ring-builder object.builder create 10 2 1

[root@controller swift]# swift-ring-builder object.builder add \

> --region 1 --zone 1 --ip 10.6.17.14 --port 6000 --device sda3 --weight 100

Output:

Device d0r1z1-10.6.17.14:6000R10.6.17.14:6000/sda3_"" with 100.0 weight got id 0

[root@controller swift]# swift-ring-builder object.builder add \

> --region 1 --zone 2 --ip 10.6.17.14 --port 6000 --device sda4 --weight 100

Output:

Device d1r1z2-10.6.17.14:6000R10.6.17.14:6000/sda4_"" with 100.0 weight got id 1

## Verify the object ring contents

[root@controller swift]# swift-ring-builder object.builder

object.builder, build version 2

1024 partitions, 2.000000 replicas, 1 regions, 2 zones, 2 devices, 100.00 balance, 0.00 dispersion

The minimum number of hours before a partition can be reassigned is 1

The overload factor is 0.00% (0.000000)

Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta

0       1     1      10.6.17.14  6000      10.6.17.14              6000      sda3 100.00          0 -100.00

1       1     2      10.6.17.14  6000      10.6.17.14              6000      sda4 100.00          0 -100.00

## Rebalance the object ring

[root@controller swift]# swift-ring-builder object.builder rebalance

...

Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00

## List the generated files

[root@controller swift]# ls

account.builder  backups            container-reconciler.conf  container-server       object.builder       object.ring.gz  proxy-server.conf

account.ring.gz  container.builder  container.ring.gz          container-server.conf  object-expirer.conf  proxy-server    swift.conf

## Copy account.ring.gz, container.ring.gz and object.ring.gz to the /etc/swift directory on every storage node and on any additional nodes running the proxy service, then start openstack-swift-proxy.service
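## For example, from the controller (hostnames as used in this deployment):

[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz object1:/etc/swift/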

## On the Object Storage node, download the swift.conf configuration sample

[root@object1 swift]# curl -o /etc/swift/swift.conf \

> https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/liberty

## Edit the /etc/swift/swift.conf configuration file

[root@object1 swift]# vim /etc/swift/swift.conf

[swift-hash]

...

swift_hash_path_suffix = changeme

swift_hash_path_prefix = changeme

[storage-policy:0]

...

name = Policy-0

default = yes
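## The changeme values above should be replaced with unique, hard-to-guess strings, and the same values must be used on every node; one way to generate such a string:

[root@object1 swift]# openssl rand -hex 10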

## Set ownership of the configuration directory

[root@object1 swift]# chown -R root:swift /etc/swift

## On the controller and any other proxy nodes, start the proxy service (make sure swift.conf and the *.ring.gz files are present in /etc/swift on those nodes first)

[root@controller ~]# systemctl enable openstack-swift-proxy.service memcached.service

[root@controller ~]# systemctl start openstack-swift-proxy.service memcached.service

## Enable and start the services on the Object Storage node

[root@object1 ~]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \

> openstack-swift-account-reaper.service openstack-swift-account-replicator.service

[root@object1 ~]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \

> openstack-swift-account-reaper.service openstack-swift-account-replicator.service

[root@object1 ~]# systemctl enable openstack-swift-container.service \

> openstack-swift-container-auditor.service openstack-swift-container-replicator.service \

> openstack-swift-container-updater.service

[root@object1 ~]# systemctl start openstack-swift-container.service \

> openstack-swift-container-auditor.service openstack-swift-container-replicator.service \

> openstack-swift-container-updater.service

[root@object1 ~]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \

> openstack-swift-object-replicator.service openstack-swift-object-updater.service

[root@object1 ~]# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \

> openstack-swift-object-replicator.service openstack-swift-object-updater.service
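## A simple check that the account (6002), container (6001) and object (6000) servers are now listening:

[root@object1 ~]# ss -lnt | grep -E ':600[012]'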

## Verify the service

[root@controller ~]# echo "export OS_AUTH_VERSION=3" \

> | tee -a admin-openrc.sh

[root@controller ~]# source admin-openrc.sh

## Check swift status

[root@controller ~]# swift stat

Account: AUTH_0d1a69b17e444cb3b4884529b6bdb372

Containers: 0

Objects: 0

Bytes: 0

X-Put-Timestamp: 1449222121.42343

X-Timestamp: 1449222121.42343

X-Trans-Id: tx20d0064672a74cd88723f-0056615fe9

Content-Type: text/plain; charset=utf-8

## Test uploading a file

[root@controller ~]# touch FILE

[root@controller ~]# swift upload container1 FILE
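## Verify the upload by listing the container and downloading the object back (container1 and FILE are the names used above):

[root@controller ~]# swift list container1

[root@controller ~]# swift download container1 FILE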
