Network Time Protocol (NTP)

Controller Node

apt install chrony

Edit the /etc/chrony/chrony.conf file and add the following:

# Replace 10.0.0.0/24 with your environment's management subnet
server controller iburst
allow 10.0.0.0/24

Comment out the "pool 2.debian.pool.ntp.org offline iburst" line, then restart the NTP service:

service chrony restart
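Commenting out the pool line can also be scripted, which helps when preparing many nodes. A minimal sketch, demonstrated on a scratch copy (/tmp/chrony.conf is a hypothetical path; point sed at /etc/chrony/chrony.conf on the real node):

```shell
# Create a scratch copy with the two relevant lines, then comment out
# the default Debian pool entry the same way you would by hand.
printf 'server controller iburst\npool 2.debian.pool.ntp.org offline iburst\n' > /tmp/chrony.conf
sed -i 's/^pool /#pool /' /tmp/chrony.conf
grep '^#pool' /tmp/chrony.conf   # the pool line is now commented out
```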

Compute Node

apt install chrony

Edit the /etc/chrony/chrony.conf file and add the following:

server controller iburst

Comment out the "pool 2.debian.pool.ntp.org offline iburst" line, then restart the service:

service chrony restart

OpenStack packages (all nodes)

apt install software-properties-common
add-apt-repository cloud-archive:ocata
apt update && apt dist-upgrade
apt install python-openstackclient

SQL database (controller node)

apt install mariadb-server python-pymysql

Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file with the following content, replacing the bind-address value with the controller node's IP address:

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the database service, then secure the installation:

service mysql restart
mysql_secure_installation
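For automated installs, the override file above can be written in one step with a heredoc. A sketch, assuming the example controller address 10.0.0.11 from the text and writing to a scratch path instead of /etc/mysql/mariadb.conf.d/99-openstack.cnf:

```shell
# Write the OpenStack MariaDB overrides non-interactively.
# CONF points at a scratch location for illustration only.
CONF=/tmp/99-openstack.cnf
cat > "$CONF" <<'EOF'
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
grep bind-address "$CONF"
```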

Message queue (controller node)

apt install rabbitmq-server

# Replace RABBIT_PASS with a password of your choice
rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

Memcached (controller node)

apt install memcached python-memcache

Edit /etc/memcached.conf and change the existing "-l 127.0.0.1" line to use the controller node's IP address:

-l 10.0.0.11

service memcached restart
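The listen-address change can likewise be scripted with sed. A sketch on a scratch copy (the real file is /etc/memcached.conf; 10.0.0.11 is the example controller address):

```shell
# Build a scratch file containing the stock listen line, then rewrite it
# to the controller's address exactly as the manual edit would.
printf -- '-l 127.0.0.1\n' > /tmp/memcached.conf
sed -i 's/^-l 127\.0\.0\.1/-l 10.0.0.11/' /tmp/memcached.conf
grep '^-l' /tmp/memcached.conf
```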

Identity service (controller node)

Prerequisites

mysql
CREATE DATABASE keystone;
# Replace KEYSTONE_DBPASS with a password of your choice
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Install and configure components

apt install keystone

Edit /etc/keystone/keystone.conf, replacing KEYSTONE_DBPASS with the database password set above.

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet

Comment out or remove any other connection options in the [database] section.

# Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone

# Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

# Replace ADMIN_PASS with a password for the admin user
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

Edit the /etc/apache2/apache2.conf file and add the following line:

ServerName controller

Finalize the installation

service apache2 restart
rm -f /var/lib/keystone/keystone.db

Configure the administrative account, replacing ADMIN_PASS with the password set during bootstrap:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Create a domain, projects, users, and roles

openstack project create --domain default --description "Service Project" service

openstack project create --domain default --description "Demo Project" demo

openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:

openstack role create user
openstack role add --project demo --user demo user

Verify operation

For security reasons, disable the temporary authentication token mechanism.

Edit the /etc/keystone/keystone-paste.ini file and remove "admin_token_auth" from the following sections:

[pipeline:public_api]
[pipeline:admin_api]
[pipeline:api_v3]

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
unset OS_AUTH_URL OS_PASSWORD

Creating the scripts

Create an admin-openrc file with the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create a demo-openrc file with the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
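Since the two openrc files differ only in a few values, a small helper function can generate both and avoid copy-paste drift. make_openrc below is a hypothetical helper, not part of OpenStack; ADMIN_PASS and DEMO_PASS are the placeholders from the text, and /tmp paths stand in for wherever you keep the files:

```shell
# Generate an openrc file for a given path/user/project/password/auth URL.
make_openrc() {
  # $1=path $2=user $3=project $4=password $5=auth_url
  cat > "$1" <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=$3
export OS_USERNAME=$2
export OS_PASSWORD=$4
export OS_AUTH_URL=$5
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}
make_openrc /tmp/admin-openrc admin admin ADMIN_PASS http://controller:35357/v3
make_openrc /tmp/demo-openrc  demo  demo  DEMO_PASS  http://controller:5000/v3
. /tmp/demo-openrc && echo "$OS_PROJECT_NAME"   # prints demo
```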

Image service

Prerequisites

mysql
CREATE DATABASE glance;
# Replace GLANCE_DBPASS with a password of your choice
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

. admin-openrc

openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:

openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Install and configure components

apt install glance

Edit the /etc/glance/glance-api.conf file, replacing GLANCE_DBPASS and GLANCE_PASS with your chosen passwords.

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit the /etc/glance/glance-registry.conf file, replacing GLANCE_DBPASS and GLANCE_PASS with your chosen passwords.

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

su -s /bin/sh -c "glance-manage db_sync" glance

#Restart the Image services:
service glance-registry restart
service glance-api restart

Verify operation

. admin-openrc

wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public

Compute service

Install and configure controller node

Prerequisites

mysql
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
# Replace NOVA_DBPASS with a password of your choice
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

. admin-openrc

# Create the nova user
openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:

openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

# Create the placement user
openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:

openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
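Each service needs the same endpoint created three times, once per interface, so a loop cuts the repetition. The echo acts as a dry-run guard; remove it (and load admin credentials) to execute for real. Shown for the compute service as an example:

```shell
# Dry-run: print the three endpoint-create calls for the compute service.
# Drop the leading echo to actually run them against keystone.
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne compute "$iface" http://controller:8774/v2.1
done
```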

Install and configure components

apt install nova-api nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler nova-placement-api

Edit the /etc/nova/nova.conf file, replacing NOVA_DBPASS with your chosen password:

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

# Replace NOVA_PASS with your chosen password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.

# Replace PLACEMENT_PASS with your chosen password
#替换PLACEMENT_PASS为自己密码
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650

su -s /bin/sh -c "nova-manage db sync" nova

service nova-api restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Install and configure a compute node

apt install nova-compute

Edit the /etc/nova/nova.conf file, replacing all placeholder passwords with your own.

# Set my_ip to the compute node's management IP address
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

# Replace PLACEMENT_PASS with your chosen password
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

If the compute node itself runs inside a virtual machine, do the following:

Edit the [libvirt] section in /etc/nova/nova-compute.conf:

[libvirt]
virt_type = qemu

service nova-compute restart

Add the compute node to the cell database

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Alternatively, add the following to /etc/nova/nova.conf so that new hosts are discovered periodically:

[scheduler]
discover_hosts_in_cells_interval = 300

Networking service

Install and configure controller node

Prerequisites

mysql
CREATE DATABASE neutron;
# Replace NEUTRON_DBPASS with a password of your choice
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

. admin-openrc

openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:

openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

[Install the neutron packages]

apt-get install neutron-server neutron-plugin-ml2 neutron-openvswitch-agent neutron-l3-agent \
neutron-dhcp-agent neutron-metadata-agent python-neutronclient

Edit the /etc/neutron/neutron.conf file:

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Note: comment out any other (e.g. sqlite) connection options.

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = stack2015

# Replace the password below with your neutron user's keystone password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = stack2015

# Replace the password below with your nova user's password
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = stack2015

[Edit the ML2 configuration]

Configure the /etc/neutron/plugins/ml2/ml2_conf.ini file:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_flat]
flat_networks = external

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini and add the following, starting with the [ovs] section:

# local_ip is the VXLAN tunnel (VTEP) endpoint; it can be the management NIC's address or that of a dedicated tunnel NIC
[ovs]
local_ip = TUNNELS_IP
bridge_mappings = external:br-ex

[agent]
tunnel_types = vxlan
l2_population = True
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[Update the L3 agent configuration]

Configure /etc/neutron/l3_agent.ini:

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex

Configure /etc/neutron/dhcp_agent.ini:

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Also add the following to the [DEFAULT] section of /etc/neutron/dhcp_agent.ini:

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

Create the /etc/neutron/dnsmasq-neutron.conf file:

echo 'dhcp-option-force=26,1450' | sudo tee /etc/neutron/dnsmasq-neutron.conf

Edit /etc/neutron/metadata_agent.ini and add the following to the [DEFAULT] section:

nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

Edit the [neutron] section of the nova configuration on the controller node

Configure /etc/nova/nova.conf, replacing the password with your own:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = stack2015
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

[Populate the neutron database]

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

The sync takes roughly 2-3 minutes.

[Restart the Nova API server]

service nova-api restart

Start Open vSwitch:

service openvswitch-switch restart

Add a bridge for the external network:

ovs-vsctl add-br br-ex

Add the physical NIC to the external bridge:

ovs-vsctl add-port br-ex enp3s0  # enp3s0 is the external-network NIC

Disable the NIC's GRO feature:

ethtool -K enp3s0 gro off

[Restart the Neutron services]

service neutron-server restart
service openvswitch-switch restart
service neutron-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart

Verify with the Neutron client by listing the loaded extensions:

. admin-openrc
neutron ext-list

Check agent status with the Neutron client:

neutron agent-list

Install and configure compute node

[Install the neutron packages on the compute node]

apt-get install neutron-plugin-ml2 neutron-openvswitch-agent

Edit /etc/neutron/neutron.conf and add the following to the [DEFAULT] section:

[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

In the [database] section, comment out the connection option and any other sqlite-related parameters:

[database]

# connection = sqlite:////var/lib/neutron/neutron.sqlite

Add the following to the [oslo_messaging_rabbit] section:

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = stack2015

Add the following to the [keystone_authtoken] section:

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = stack2015

Edit /etc/neutron/plugins/ml2/ml2_conf.ini as follows:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini as follows:

[agent]
tunnel_types = vxlan
l2_population = False
prevent_arp_spoofing = False
arp_responder = False
vxlan_udp_port = 4789

[ovs]
local_ip = 172.171.4.211
tunnel_type = vxlan
tunnel_bridge = br-tun
integration_bridge = br-int
tunnel_id_ranges = 1:1000
tenant_network_type = vxlan
enable_tunneling = True

[securitygroup]
enable_ipset = True
enable_security_group = False
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Configure /etc/nova/nova.conf, adding the following to the [neutron] section and replacing the password with your own:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = stack2015

[Restart nova-compute]

service nova-compute restart

[Restart the Open vSwitch agent]

service openvswitch-switch restart
service neutron-openvswitch-agent restart

[Verify neutron on the compute node]

. admin-openrc
neutron agent-list

Block Storage service

Install and configure controller node

mysql
CREATE DATABASE cinder;
# Replace CINDER_DBPASS with a password of your choice
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

. admin-openrc

openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:

openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

Install and configure components

apt install cinder-api cinder-scheduler

Edit the /etc/cinder/cinder.conf file:

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

# Set my_ip to the controller node's IP address
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11

# Replace CINDER_PASS with your chosen password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Edit the /etc/nova/nova.conf file and add the following to it:

[cinder]
os_region_name = RegionOne

su -s /bin/sh -c "cinder-manage db sync" cinder

Finalize installation

service nova-api restart

service cinder-scheduler restart
service apache2 restart

Install and configure a storage node

Prerequisites

apt install lvm2

Use the device names from your environment (sdb, sda, sdc, etc.). Before this step you must attach an extra disk to the cinder storage node.

pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created

vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

In the devices section of /etc/lvm/lvm.conf, add a filter that accepts the /dev/sdb device and rejects all other devices, so that LVM on the storage node does not scan disks it should not touch.

devices {
filter = [ "a/sdb/", "r/.*/" ]
}
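If the operating system disk of the storage node also uses LVM (commonly on /dev/sda), the filter must accept that device as well, or the OS volumes will stop being scanned. A hedged example assuming the OS sits on sda:

```
devices {
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
```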

Install and configure components

apt install cinder-volume

Edit the /etc/cinder/cinder.conf file, replacing all placeholder passwords with your own.

# Comment out any other connection options
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

# Replace RABBIT_PASS with your chosen password
# Set my_ip to the storage node's IP address
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
enabled_backends = lvm
glance_api_servers = http://controller:9292

# Replace CINDER_PASS with your chosen password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Finalize installation

service tgt restart
service cinder-volume restart

Object Storage service (swift)

Controller node

Prerequisites

. admin-openrc
openstack user create --domain default --password-prompt swift
User Password:
Repeat User Password:

openstack role add --project service --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

Install and configure components

apt-get install swift swift-proxy python-swiftclient \
python-keystoneclient python-keystonemiddleware memcached

Create the /etc/swift directory and obtain the proxy service configuration file from the source repository:

mkdir -p /etc/swift

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton

Edit the /etc/swift/proxy-server.conf file, replacing all placeholder passwords:

[DEFAULT]

bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = controller:11211

Storage node

Prerequisites

apt-get install xfsprogs rsync

# Confirm beforehand that the extra disks are attached and the device names are correct
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc

mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc

Edit the /etc/fstab file and add the following entries:
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

mount /srv/node/sdb
mount /srv/node/sdc

Create or edit the /etc/rsyncd.conf file, replacing the address with the storage node's IP address:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

Edit the /etc/default/rsync file and enable the rsync service:

RSYNC_ENABLE=true

Start the rsync service:

service rsync start

Install and configure components

apt-get install swift swift-account swift-container swift-object

curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton

Edit the /etc/swift/account-server.conf file, replacing bind_ip with the storage node's IP address:

[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit the /etc/swift/container-server.conf file, replacing bind_ip with the storage node's IP address:

[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit the /etc/swift/object-server.conf file, replacing bind_ip with the storage node's IP address:

[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

Create account ring(controller node)

Change to the /etc/swift directory.

Create the base account.builder file. The three arguments are the partition power (10), the replica count (set this to the number of storage nodes; 3 here), and the minimum number of hours between moves of a partition (1):

swift-ring-builder account.builder create 10 3 1

swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdb --weight 100

swift-ring-builder account.builder

swift-ring-builder account.builder rebalance

Create the base container.builder file with the same arguments (partition power 10, replica count, min-part-hours 1):

swift-ring-builder container.builder create 10 3 1

swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdb --weight 100

swift-ring-builder container.builder

swift-ring-builder container.builder rebalance

Create the base object.builder file with the same arguments:

swift-ring-builder object.builder create 10 3 1

swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdb --weight 100

swift-ring-builder object.builder

swift-ring-builder object.builder rebalance

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.

Finalize installation(controller node)

Obtain the /etc/swift/swift.conf file from the Object Storage source repository:

curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton

Edit the /etc/swift/swift.conf file and complete the following actions

[swift-hash]
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX

[storage-policy:0]
name = Policy-0
default = yes
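HASH_PATH_SUFFIX and HASH_PATH_PREFIX should be unique, secret values, and they must never change once data has been stored. One common way to generate them, assuming openssl is available:

```shell
# Generate random values for the swift hash path prefix/suffix.
# Keep these secret and identical on every node in the cluster.
SUFFIX=$(openssl rand -hex 10)
PREFIX=$(openssl rand -hex 10)
echo "swift_hash_path_suffix = $SUFFIX"
echo "swift_hash_path_prefix = $PREFIX"
```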

Copy the swift.conf file to the /etc/swift directory on each storage node and any additional nodes running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

chown -R root:swift /etc/swift

# On the controller node, restart the proxy services
service memcached restart
service swift-proxy restart

# On the storage nodes, start the swift services
swift-init all start

Dashboard

Install and configure

apt install openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings.py file and complete the following actions:

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
# Do not edit the ALLOWED_HOSTS parameter under the Ubuntu configuration section.

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

Finalize installation

service apache2 reload
