Today I am adding the storage-related part of the deployment, namely Block and Object storage. To keep things simple and still see results, each is deployed on a single node: one node for Block storage and one node for Object storage.

You may notice that this diagram differs slightly from the one in the previous two posts: the two storage nodes are different. That doesn't really matter. The reason for the change is that I no longer have time to keep investing in this project and have to move on to another, more urgent one. Anyway, plans never keep up with changes... enough rambling.

Deploy cinder.

Steps numbered cX are performed on the controller node; steps numbered ccX are performed on the cinder (Block storage) node.

c1. Prepare the database

 mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';
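
As a quick sanity check (assuming the database server runs on the controller and is reachable as node0), you can verify that the new cinder account can actually log in with the password just granted:

 mysql -u cinder -popenstack -h node0 -e "SHOW DATABASES;"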

c2. Create the service and endpoints

 source admin-openrc.sh

 openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne volume public http://node0:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://node0:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://node0:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://node0:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://node0:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://node0:8776/v2/%\(tenant_id\)s
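
If you want to double-check what was just registered, a read-only listing should show both volume services and the six endpoints:

 openstack service list
openstack endpoint list | grep volume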

c3. Install the packages

 yum install openstack-cinder python-cinderclient

c4. Configure /etc/cinder/cinder.conf. The settings below need to be changed; the rest of the file can keep its default values.

 [DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.100
verbose = True

[database]
connection = mysql://cinder:openstack@node0/cinder

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = openstack

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
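
If you prefer to apply these settings from a script rather than editing the file by hand, the following sketch does the same thing with crudini (an assumption on my part: crudini is not installed by the steps above, so pull it in first, and adjust the IPs and passwords to your own environment):

 yum install -y crudini
crudini --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
crudini --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.1.100
crudini --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@node0/cinder
crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host node0
crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password openstack
crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp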

c5. Sync the database

su -s /bin/sh -c "cinder-manage db sync" cinder
 [root@node0 opt]# su -s /bin/sh -c "cinder-manage db sync" cinder
No handlers could be found for logger "oslo_config.cfg"
/usr/lib/python2./site-packages/oslo_db/sqlalchemy/enginefacade.py:: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
-- ::25.731 INFO migrate.versioning.api [-] -> ...
-- ::27.005 INFO migrate.versioning.api [-] done
-- ::27.005 INFO migrate.versioning.api [-] -> ...
-- ::27.338 INFO migrate.versioning.api [-] done
-- ::27.339 INFO migrate.versioning.api [-] -> ...
-- ::27.396 INFO migrate.versioning.api [-] done
-- ::27.397 INFO migrate.versioning.api [-] -> ...
-- ::27.731 INFO migrate.versioning.api [-] done
-- ::27.731 INFO migrate.versioning.api [-] -> ...
-- ::27.814 INFO migrate.versioning.api [-] done
-- ::27.814 INFO migrate.versioning.api [-] -> ...
-- ::27.889 INFO migrate.versioning.api [-] done
-- ::27.889 INFO migrate.versioning.api [-] -> ...
-- ::27.964 INFO migrate.versioning.api [-] done
-- ::27.964 INFO migrate.versioning.api [-] -> ...
-- ::28.014 INFO migrate.versioning.api [-] done
-- ::28.014 INFO migrate.versioning.api [-] -> ...
-- ::28.072 INFO migrate.versioning.api [-] done
-- ::28.073 INFO migrate.versioning.api [-] -> ...
-- ::28.123 INFO migrate.versioning.api [-] done
-- ::28.124 INFO migrate.versioning.api [-] -> ...
-- ::28.214 INFO migrate.versioning.api [-] done
-- ::28.214 INFO migrate.versioning.api [-] -> ...
-- ::28.297 INFO migrate.versioning.api [-] done
-- ::28.298 INFO migrate.versioning.api [-] -> ...
-- ::28.381 INFO migrate.versioning.api [-] done
-- ::28.381 INFO migrate.versioning.api [-] -> ...
-- ::28.465 INFO migrate.versioning.api [-] done
-- ::28.465 INFO migrate.versioning.api [-] -> ...
-- ::28.489 INFO migrate.versioning.api [-] done
-- ::28.489 INFO migrate.versioning.api [-] -> ...
-- ::28.548 INFO migrate.versioning.api [-] done
-- ::28.548 INFO migrate.versioning.api [-] -> ...
-- ::28.807 INFO migrate.versioning.api [-] done
-- ::28.807 INFO migrate.versioning.api [-] -> ...
-- ::28.991 INFO migrate.versioning.api [-] done
-- ::28.992 INFO migrate.versioning.api [-] -> ...
-- ::29.074 INFO migrate.versioning.api [-] done
-- ::29.074 INFO migrate.versioning.api [-] -> ...
-- ::29.132 INFO migrate.versioning.api [-] done
-- ::29.133 INFO migrate.versioning.api [-] -> ...
-- ::29.183 INFO migrate.versioning.api [-] done
-- ::29.183 INFO migrate.versioning.api [-] -> ...
-- ::29.257 INFO migrate.versioning.api [-] done
-- ::29.257 INFO migrate.versioning.api [-] -> ...
-- ::29.349 INFO migrate.versioning.api [-] done
-- ::29.349 INFO migrate.versioning.api [-] -> ...
-- ::29.649 INFO migrate.versioning.api [-] done
-- ::29.649 INFO migrate.versioning.api [-] -> ...
-- ::30.158 INFO migrate.versioning.api [-] done
-- ::30.158 INFO migrate.versioning.api [-] -> ...
-- ::30.183 INFO migrate.versioning.api [-] done
-- ::30.184 INFO migrate.versioning.api [-] -> ...
-- ::30.191 INFO migrate.versioning.api [-] done
-- ::30.192 INFO migrate.versioning.api [-] -> ...
-- ::30.200 INFO migrate.versioning.api [-] done
-- ::30.200 INFO migrate.versioning.api [-] -> ...
-- ::30.208 INFO migrate.versioning.api [-] done
-- ::30.208 INFO migrate.versioning.api [-] -> ...
-- ::30.216 INFO migrate.versioning.api [-] done
-- ::30.217 INFO migrate.versioning.api [-] -> ...
-- ::30.233 INFO migrate.versioning.api [-] done
-- ::30.233 INFO migrate.versioning.api [-] -> ...
-- ::30.342 INFO migrate.versioning.api [-] done
-- ::30.342 INFO migrate.versioning.api [-] -> ...
/usr/lib64/python2./site-packages/sqlalchemy/sql/schema.py:: SAWarning: Table 'encryption' specifies columns 'volume_type_id' as primary_key=True, not matching locally specified columns 'encryption_id'; setting the current primary key columns to 'encryption_id'. This warning may become an exception in a future release
", ".join("'%s'" % c.name for c in self.columns)
-- ::30.600 INFO migrate.versioning.api [-] done
-- ::30.600 INFO migrate.versioning.api [-] -> ...
-- ::30.675 INFO migrate.versioning.api [-] done
-- ::30.675 INFO migrate.versioning.api [-] -> ...
-- ::30.759 INFO migrate.versioning.api [-] done
-- ::30.759 INFO migrate.versioning.api [-] -> ...
-- ::30.860 INFO migrate.versioning.api [-] done
-- ::30.860 INFO migrate.versioning.api [-] -> ...
-- ::30.942 INFO migrate.versioning.api [-] done
-- ::30.943 INFO migrate.versioning.api [-] -> ...
-- ::31.059 INFO migrate.versioning.api [-] done
-- ::31.059 INFO migrate.versioning.api [-] -> ...
-- ::31.134 INFO migrate.versioning.api [-] done
-- ::31.134 INFO migrate.versioning.api [-] -> ...
-- ::31.502 INFO migrate.versioning.api [-] done
-- ::31.502 INFO migrate.versioning.api [-] -> ...
-- ::31.577 INFO migrate.versioning.api [-] done
-- ::31.577 INFO migrate.versioning.api [-] -> ...
-- ::31.586 INFO migrate.versioning.api [-] done
-- ::31.586 INFO migrate.versioning.api [-] -> ...
-- ::31.594 INFO migrate.versioning.api [-] done
-- ::31.594 INFO migrate.versioning.api [-] -> ...
-- ::31.602 INFO migrate.versioning.api [-] done
-- ::31.602 INFO migrate.versioning.api [-] -> ...
-- ::31.610 INFO migrate.versioning.api [-] done
-- ::31.611 INFO migrate.versioning.api [-] -> ...
-- ::31.619 INFO migrate.versioning.api [-] done
-- ::31.619 INFO migrate.versioning.api [-] -> ...
-- ::31.643 INFO migrate.versioning.api [-] done
-- ::31.644 INFO migrate.versioning.api [-] -> ...
-- ::31.719 INFO migrate.versioning.api [-] done
-- ::31.719 INFO migrate.versioning.api [-] -> ...
-- ::31.852 INFO migrate.versioning.api [-] done
-- ::31.853 INFO migrate.versioning.api [-] -> ...
-- ::31.936 INFO migrate.versioning.api [-] done
-- ::31.936 INFO migrate.versioning.api [-] -> ...
-- ::32.019 INFO migrate.versioning.api [-] done
-- ::32.020 INFO migrate.versioning.api [-] -> ...
-- ::32.120 INFO migrate.versioning.api [-] done
-- ::32.120 INFO migrate.versioning.api [-] -> ...
-- ::32.378 INFO migrate.versioning.api [-] done
-- ::32.378 INFO migrate.versioning.api [-] -> ...
-- ::32.470 INFO migrate.versioning.api [-] done
-- ::32.470 INFO migrate.versioning.api [-] -> ...
-- ::32.662 INFO migrate.versioning.api [-] done
-- ::32.662 INFO migrate.versioning.api [-] -> ...
-- ::32.670 INFO migrate.versioning.api [-] done
-- ::32.670 INFO migrate.versioning.api [-] -> ...
-- ::32.678 INFO migrate.versioning.api [-] done
-- ::32.678 INFO migrate.versioning.api [-] -> ...
-- ::32.686 INFO migrate.versioning.api [-] done
-- ::32.686 INFO migrate.versioning.api [-] -> ...
-- ::32.695 INFO migrate.versioning.api [-] done
-- ::32.695 INFO migrate.versioning.api [-] -> ...
-- ::32.703 INFO migrate.versioning.api [-] done
[root@node0 opt]#
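
To confirm that the sync actually brought the schema up to date (the warnings above are harmless), you can ask cinder-manage for the current migration version; the exact number depends on the release, so treat this only as a sanity check:

 su -s /bin/sh -c "cinder-manage db version" cinder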

c6. Configure nova in /etc/nova/nova.conf

 [cinder]
os_region_name = RegionOne
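
The same change can be made non-interactively; this is just the crudini equivalent of the two lines above (assuming crudini is available, as in the earlier sketch):

 crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne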

c7. Restart the Compute API service (this takes quite a while)

 systemctl restart openstack-nova-api.service
 /var/log/messages:
Feb :: node0 nova-api: -- ::06.869 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.870 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.871 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.872 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.880 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.882 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.883 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.885 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.886 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.886 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.887 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.890 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.891 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.894 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.896 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.896 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.897 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.898 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.899 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.901 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.901 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.902 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.902 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.904 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.904 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.905 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.906 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.909 INFO oslo_service.service [-] Child exited with status
Feb :: node0 nova-api: -- ::06.910 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 nova-api: -- ::06.911 INFO oslo_service.service [-] Child killed by signal
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 systemd: Started Session of user root.
Feb :: node0 systemd: Starting Session of user root.
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 dnsmasq-dhcp[]: DHCPDISCOVER(ns-3bf4d3fc-7e) :b1:1c:2e::b4 no address available
Feb :: node0 nova-scheduler: -- ::33.386 INFO nova.scheduler.host_manager [req-57515c80--43ab-ad68-73f9dc626f5e - - - - -] Successfully synced instances from host 'node1'.
Feb :: node0 systemd: openstack-nova-api.service stop-sigterm timed out. Killing.
Feb :: node0 systemd: openstack-nova-api.service: main process exited, code=killed, status=/KILL
Feb :: node0 systemd: Unit openstack-nova-api.service entered failed state.
Feb :: node0 systemd: openstack-nova-api.service failed.
Feb :: node0 systemd: Starting OpenStack Nova API Server...
Feb :: node0 nova-api: No handlers could be found for logger "oslo_config.cfg"
Feb :: node0 nova-api: -- ::39.872 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
Feb :: node0 nova-api: -- ::40.157 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'servers', 'versions']
Feb :: node0 nova-api: -- ::40.161 WARNING oslo_config.cfg [-] Option "username" from group "keystone_authtoken" is deprecated. Use option "user-name" from group "keystone_authtoken".
Feb :: node0 nova-api: -- ::40.317 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'servers', 'versions']
Feb :: node0 nova-api: -- ::40.479 INFO nova.wsgi [-] osapi_compute listening on 0.0.0.0:
Feb :: node0 nova-api: -- ::40.479 INFO oslo_service.service [-] Starting workers
Feb :: node0 nova-api: -- ::40.482 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.485 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.486 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.489 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.490 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.492 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.493 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.495 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.496 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.500 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.501 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.502 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.503 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.507 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.507 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.509 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.510 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.513 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.514 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.518 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.518 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.521 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.522 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.524 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.525 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.528 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.528 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.531 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.532 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.534 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.535 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.538 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
Feb :: node0 nova-api: -- ::40.539 INFO nova.osapi_compute.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8774/
Feb :: node0 nova-api: -- ::40.766 INFO nova.wsgi [-] metadata listening on 0.0.0.0:
Feb :: node0 nova-api: -- ::40.767 INFO oslo_service.service [-] Starting workers
Feb :: node0 nova-api: -- ::40.772 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.775 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.777 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.780 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.782 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.785 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.787 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.789 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.792 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.794 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.797 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.799 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.801 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.804 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.806 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.808 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.810 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.813 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.815 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.818 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.820 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.823 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.825 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.828 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.830 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.833 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.835 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.837 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.840 INFO oslo_service.service [-] Started child
Feb :: node0 nova-api: -- ::40.842 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/
Feb :: node0 nova-api: -- ::40.844 INFO oslo_service.service [-] Started child
Feb :: node0 systemd: Started OpenStack Nova API Server.
Feb :: node0 nova-api: -- ::40.847 INFO nova.metadata.wsgi.server [-] () wsgi starting up on http://0.0.0.0:8775/

c8. Start the cinder services

 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
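
Before moving on to the Block node, it is worth confirming that both units really came up; a plain systemd status check is enough:

 systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service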

Now deploy cinder on the Block storage node.

cc1. Install the utility packages

 yum install lvm2

 systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

cc2. Create the LVM physical volume and volume group

 pvcreate /dev/sdb

 vgcreate cinder-volumes /dev/sdb
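
A quick look at the new physical volume and volume group confirms they were created as expected (sizes will of course depend on your disk):

 pvdisplay /dev/sdb
vgs cinder-volumes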

cc3. Configure /etc/lvm/lvm.conf

If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "r/.*/"]

My configuration is shown below (a typo crept in here; the resulting error message is discussed later):

         # Example
# Accept every block device:
# filter = [ "a|.*/|" ]
# Reject the cdrom drive:
# filter = [ "r|/dev/cdrom|" ]
# Work with just loopback devices, e.g. for testing:
# filter = [ "a|loop|", "r|.*|" ]
# Accept all loop devices and ide drives except hdc:
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# Use anchors to be very specific:
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
filter = [ "a/sda/", "a/sdb/", "r/.*/"]

cc4. Install the packages

 yum install openstack-cinder targetcli python-oslo-policy

cc5. Configure /etc/cinder/cinder.conf

 [DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.120
enabled_backends = lvm
glance_host = node0
verbose = True

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = openstack

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[database]
connection = mysql://cinder:openstack@node0/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

cc6. Start the services

 systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
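
If the services start but the LVM backend fails to initialize, the error shows up only in the volume log, so it is worth watching it right after the start (this is how the problem below was tracked down):

 tail -f /var/log/cinder/volume.log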

cc7. Verify the services

 source admin-openrc.sh

 [root@node0 opt]# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node0 | nova | enabled | up | --24T05::09.000000 | - |
| cinder-volume | node2@lvm | nova | enabled | down | - | - |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
 [root@node0 opt]# cinder-manage service list
No handlers could be found for logger "oslo_config.cfg"
/usr/lib/python2./site-packages/oslo_db/sqlalchemy/enginefacade.py:: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
Binary Host Zone Status State Updated At
cinder-scheduler node0 nova enabled :-) -- ::
cinder-volume node2@lvm nova enabled XXX None
[root@node0 opt]#

A problem showed up: node2 above is not working properly, it is in the down state. My first check was whether NTP was synchronized between node2 and node0, and that turned out to be fine!

 [root@node2 ~]# ntpdate node0
Mar :: ntpdate[]: adjust time server 192.168.1.100 offset -0.000001 sec

Could the problem be in the configuration, then? Check the log, /var/log/cinder/volume.log:

 -- ::30.563  INFO cinder.service [-] Starting cinder-volume node (version 7.0.)
-- ::30.565 INFO cinder.volume.manager [req-d78b5d83-26f9-45d1-96a8-52c422c294e3 - - - - -] Starting volume driver LVMVolumeDriver (3.0.)
-- ::30.690 ERROR cinder.volume.manager [req-d78b5d83-26f9-45d1-96a8-52c422c294e3 - - - - -] Failed to initialize driver.
-- ::30.690 ERROR cinder.volume.manager Traceback (most recent call last):
-- ::30.690 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line , in init_host
-- ::30.690 ERROR cinder.volume.manager self.driver.check_for_setup_error()
-- ::30.690 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line , in wrapper
-- ::30.690 ERROR cinder.volume.manager return f(*args, **kwargs)
-- ::30.690 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line , in check_for_setup_error
-- ::30.690 ERROR cinder.volume.manager lvm_conf=lvm_conf_file)
-- ::30.690 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line , in __init__
-- ::30.690 ERROR cinder.volume.manager if self._vg_exists() is False:
-- ::30.690 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line , in _vg_exists
-- ::30.690 ERROR cinder.volume.manager run_as_root=True)
-- ::30.690 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/utils.py", line , in execute
-- ::30.690 ERROR cinder.volume.manager return processutils.execute(*cmd, **kwargs)
-- ::30.690 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line , in execute
-- ::30.690 ERROR cinder.volume.manager cmd=sanitized_cmd)
-- ::30.690 ERROR cinder.volume.manager ProcessExecutionError: Unexpected error while running command.
-- ::30.690 ERROR cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes
-- ::30.690 ERROR cinder.volume.manager Exit code:
-- ::30.690 ERROR cinder.volume.manager Stdout: u''
-- ::30.690 ERROR cinder.volume.manager Stderr: u' Invalid filter pattern "i"a/sda/",".\n'
-- ::30.690 ERROR cinder.volume.manager
-- ::30.756 INFO oslo.messaging._drivers.impl_rabbit [req-8941ff35-6d20-4e5e-97c6-79fdbbfb508d - - - - -] Connecting to AMQP server on node0:
-- ::30.780 INFO oslo.messaging._drivers.impl_rabbit [req-8941ff35-6d20-4e5e-97c6-79fdbbfb508d - - - - -] Connected to AMQP server on node0:
-- ::40.800 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::50.805 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::00.809 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::10.819 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::20.824 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::30.824 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::31.796 WARNING cinder.volume.manager [req-41b39517-4c6e-4e91-a594-488f3d25e68e - - - - -] Update driver status failed: (config name lvm) is uninitialized.
-- ::40.833 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::50.838 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::00.838 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::10.848 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::20.852 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::30.852 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::31.797 WARNING cinder.volume.manager [req-aacbae0b-cce8-4bc9-9c6a-2e3098f34c25 - - - - -] Update driver status failed: (config name lvm) is uninitialized.
-- ::40.861 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::50.866 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::00.866 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
-- ::10.876 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".

Sure enough, the configuration was the problem: there was a typo in the filter, an extra 'i' character. After correcting it, start the cinder-volume service again and verify once more:

 [root@node0 opt]# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node0 | nova | enabled | up | --24T06::49.000000 | - |
| cinder-volume | node2@lvm | nova | enabled | up | --24T06::51.000000 | - |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
[root@node0 opt]#
[root@node0 opt]#
[root@node0 opt]# cinder-manage service list
No handlers could be found for logger "oslo_config.cfg"
/usr/lib/python2./site-packages/oslo_db/sqlalchemy/enginefacade.py:: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
Binary Host Zone Status State Updated At
cinder-scheduler node0 nova enabled :-) -- ::
cinder-volume node2@lvm nova enabled :-) -- ::

Everything started up normally. OK, the cinder deployment is done!

Finally, deploy swift. Like cinder, the swift deployment has two parts: the controller node and the object node. In terms of software components, swift consists of the proxy-server, the account server, the container server, and the object server. The proxy-server can in principle run on any node; in my environment I deployed it on the controller node, and the other three on the object node.

Below, steps numbered sX are performed on the controller node and steps numbered ssX on the object node.

s1. Create the service (note: the proxy-server installation does not need a database)

 source admin-openrc.sh

 openstack user create --domain default --password-prompt swift
openstack role add --project service --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://node0:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store internal http://node0:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store admin http://node0:8080/v1/AUTH_%\(tenant_id\)s
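
As with cinder, a quick read-only check confirms that the object-store service and its three endpoints were registered:

 openstack service list
openstack endpoint list | grep object-store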

s2. Install the packages

 yum install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached

s3. Configure /etc/swift/proxy-server.conf

Here, first download the original sample configuration file into the target directory, then modify it.

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/liberty
 [DEFAULT]
bind_port = 8080
swift_dir = /etc/swift
user = swift

# Enables exposing configuration settings via HTTP GET /info.
# expose_info = true # Key to use for admin calls that are HMAC signed. Default is empty,
# which will disable admin calls to /info.
# admin_key = secret_admin_key
#
# Allows the ability to withhold sections from showing up in the public calls
# to /info. You can withhold subsections by separating the dict level with a
# ".". The following would cause the sections 'container_quotas' and 'tempurl'
# to not be listed, and the key max_failed_deletes would be removed from
# bulk_delete. Default value is 'swift.valid_api_versions' which allows all
# registered features to be listed via HTTP GET /info except
# swift.valid_api_versions information
# disallowed_sections = swift.valid_api_versions, container_quotas, tempurl # Use an integer to override the number of pre-forked processes that will
# accept connections. Should default to the number of effective cpu
# cores in the system. It's worth noting that individual workers will
# use many eventlet co-routines to service multiple concurrent requests.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients =
#
# Set the following two lines to enable SSL. This is for testing only.
# cert_file = /etc/swift/proxy.crt
# key_file = /etc/swift/proxy.key
#
# expiring_objects_container_divisor =
# expiring_objects_account_name = expiring_objects
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_headers = false
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to , the default.
# log_max_line_length =
#
# This optional suffix (default is empty) that would be appended to the swift transaction
# id allows one to easily figure out from which cluster that X-Trans-Id belongs to.
# This is very useful when one is managing more than one swift cluster.
# trans_id_suffix =
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port =
#
# You can enable StatsD logging here:
# log_statsd_host = localhost
# log_statsd_port =
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# Use a comma separated list of full url (http://foo.bar:1234,https://foo.bar)
# cors_allow_origin =
# strict_cors_mode = True
#
# client_timeout =
# eventlet_debug = false

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
# You can override the default log routing for this app here:
# set log_name = proxy-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_address = /dev/log
#
# log_handoffs = true
# recheck_account_existence =
# recheck_container_existence =
# object_chunk_size =
# client_chunk_size =
#
# How long the proxy server will wait on responses from the a/c/o servers.
# node_timeout =
#
# How long the proxy server will wait for an initial response and to read a
# chunk of data from the object servers while serving GET / HEAD requests.
# Timeouts from these requests can be recovered from so setting this to
# something lower than node_timeout would provide quicker error recovery
# while allowing for a longer timeout for non-recoverable requests (PUTs).
# Defaults to node_timeout, should be overriden if node_timeout is set to a
# high number to prevent client timeouts from firing before the proxy server
# has a chance to retry.
# recoverable_node_timeout = node_timeout
#
# conn_timeout = 0.5
#
# How long to wait for requests to finish after a quorum has been established.
# post_quorum_timeout = 0.5
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval =
#
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit =
#
# If set to 'true' any authorized user may create and delete accounts; if
# 'false' no one, even authorized, can.
# allow_account_management = false
#
# Set object_post_as_copy = false to turn on fast posts where only the metadata
# changes are stored anew and the original data file is kept in place. This
# makes for quicker posts; but since the container metadata isn't updated in
# this mode, features like container sync won't be able to sync posts.
# object_post_as_copy = true
#
# If set to 'true' authorized accounts that do not yet exist within the Swift
# cluster will be automatically created.
account_autocreate = true
#
# If set to a positive value, trying to create a container when the account
# already has at least this maximum containers will result in a Forbidden.
# Note: This is a soft limit, meaning a user might exceed the cap for
# recheck_account_existence before the 403s kick in.
# max_containers_per_account =
#
# This is a comma separated list of account hashes that ignore the
# max_containers_per_account cap.
# max_containers_whitelist =
#
# Comma separated list of Host headers to which the proxy will deny requests.
# deny_host_headers =
#
# Prefix used when automatically creating accounts.
# auto_create_account_prefix = .
#
# Depth of the proxy put queue.
# put_queue_depth =
#
# Storage nodes can be chosen at random (shuffle), by using timing
# measurements (timing), or by using an explicit match (affinity).
# Using timing measurements may allow for lower overall latency, while
# using affinity allows for finer control. In both the timing and
# affinity cases, equally-sorting nodes are still randomly chosen to
# spread load.
# The valid values for sorting_method are "affinity", "shuffle", and "timing".
# sorting_method = shuffle
#
# If the "timing" sorting_method is used, the timings will only be valid for
# the number of seconds configured by timing_expiry.
# timing_expiry =
#
# The maximum time (seconds) that a large object connection is allowed to last.
# max_large_object_get_time =
#
# Set to the number of nodes to contact for a normal request. You can use
# '* replicas' at the end to have it use the number given times the number of
# replicas for the ring being used for the request.
# request_node_count = * replicas
#
# Which backend servers to prefer on reads. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. The value after the equals is
# the priority; lower numbers are higher priority.
#
# Example: first read from region zone , then region zone , then
# anything in region , then everything else:
# read_affinity = r1z1=, r1z2=, r2=
# Default is empty, meaning no preference.
# read_affinity =
#
# Which backend servers to prefer on writes. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. If this is set, then when
# handling an object PUT request, some number (see setting
# write_affinity_node_count) of local backend servers will be tried
# before any nonlocal ones.
#
# Example: try to write to regions and before writing to any other
# nodes:
# write_affinity = r1, r2
# Default is empty, meaning no preference.
# write_affinity =
#
# The number of local (as governed by the write_affinity setting)
# nodes to attempt to contact first, before any non-local ones. You
# can use '* replicas' at the end to have it use the number given
# times the number of replicas for the ring being used for the
# request.
# write_affinity_node_count = * replicas
#
# These are the headers whose values will only be shown to swift_owners. The
# exact definition of a swift_owner is up to the auth system in use, but
# usually indicates administrative responsibilities.
# swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-, x-container-meta-temp-url-key, x-container-meta-temp-url-key-, x-account-access-control

[filter:tempauth]
use = egg:swift#tempauth
# You can override the default log routing for this filter here:
# set log_name = tempauth
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# The reseller prefix will verify a token begins with this prefix before even
# attempting to validate it. Also, with authorization, only Swift storage
# accounts with this prefix will be authorized by this middleware. Useful if
# multiple auth systems are in use for one Swift cluster.
# The reseller_prefix may contain a comma separated list of items. The first
# item is used for the token as mentioned above. If second and subsequent
# items exist, the middleware will handle authorization for an account with
# that prefix. For example, for prefixes "AUTH, SERVICE", a path of
# /v1/SERVICE_account is handled the same as /v1/AUTH_account. If an empty
# (blank) reseller prefix is required, it must be first in the list. Two
# single quote characters indicates an empty (blank) reseller prefix.
# reseller_prefix = AUTH #
# The require_group parameter names a group that must be presented by
# either X-Auth-Token or X-Service-Token. Usually this parameter is
# used only with multiple reseller prefixes (e.g., SERVICE_require_group=blah).
# By default, no group is needed. Do not use .admin.
# require_group = # The auth prefix will cause requests beginning with this prefix to be routed
# to the auth subsystem, for granting tokens, etc.
# auth_prefix = /auth/
# token_life =
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# This specifies what scheme to return with storage urls:
# http, https, or default (chooses based on what the server is running as)
# This can be useful with an SSL load balancer in front of a non-SSL server.
# storage_url_scheme = default
#
# Lastly, you need to list all the accounts/users you want here. The format is:
# user_<account>_<user> = <key> [group] [group] [...] [storage_url]
# or if you want underscores in <account> or <user>, you can base64 encode them
# (with no equal signs) and use this format:
# user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]
# There are special groups of:
# .reseller_admin = can do anything to any account for this auth
# .admin = can do anything within the account
# If neither of these groups are specified, the user can only access containers
# that have been explicitly allowed for them by a .admin or .reseller_admin.
# The trailing optional storage_url allows you to specify an alternate url to
# hand back to the user upon authentication. If not specified, this defaults to
# $HOST/v1/<reseller_prefix>_<account> where $HOST will do its best to resolve
# to what the requester would need to use to reach this host.
# Here are example entries, required for running the tests:
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service

# To enable Keystone authentication you need to have the auth token
# middleware first to be configured. Here is an example below, please
# refer to the keystone's documentation for details about the
# different settings.
#
# You'll need to have as well the keystoneauth middleware enabled
# and have it in your main pipeline so instead of having tempauth in
# there you can change it to: authtoken keystoneauth
#
[filter:authtoken]
# paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# identity_uri = http://keystonehost:35357/
# auth_uri = http://keystonehost:5000/
# admin_tenant_name = service
# admin_user = swift
# admin_password = password
#
# delay_auth_decision defaults to False, but leaving it as false will
# prevent other auth systems, staticweb, tempurl, formpost, and ACLs from
# working. This value must be explicitly set to True.
# delay_auth_decision = False
#
# cache = swift.cache
# include_service_catalog = False
#
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = openstack
delay_auth_decision = true
auth_protocol = http

[filter:keystoneauth]
use = egg:swift#keystoneauth
# The reseller_prefix option lists account namespaces that this middleware is
# responsible for. The prefix is placed before the Keystone project id.
# For example, for project , and prefix AUTH, the account is
# named AUTH_12345678 (i.e., path is /v1/AUTH_12345678/...).
# Several prefixes are allowed by specifying a comma-separated list
# as in: "reseller_prefix = AUTH, SERVICE". The empty string indicates a
# single blank/empty prefix. If an empty prefix is required in a list of
# prefixes, a value of '' (two single quote characters) indicates a
# blank/empty prefix. Except for the blank/empty prefix, an underscore ('_')
# character is appended to the value unless already present.
# reseller_prefix = AUTH
#
# The user must have at least one role named by operator_roles on a
# project in order to create, delete and modify containers and objects
# and to set and read privileged headers such as ACLs.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_operator_roles applies to the /v1/SERVICE_<project>
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
operator_roles = admin, user
#
# The reseller admin role has the ability to create and delete accounts
# reseller_admin_role = ResellerAdmin
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# If is_admin is true, a user whose username is the same as the project name
# and who has any role on the project will have access rights elevated to be
# the same as if the user had an operator role. Note that the condition
# compares names rather than UUIDs. This option is deprecated.
# is_admin = false
#
# If the service_roles parameter is present, an X-Service-Token must be
# present in the request that when validated, grants at least one role listed
# in the parameter. The X-Service-Token may be scoped to any project.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_service_roles applies to the /v1/SERVICE_<project>
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
# By default, no service_roles are required.
# service_roles =
#
# For backwards compatibility, keystoneauth will match names in cross-tenant
# access control lists (ACLs) when both the requesting user and the tenant
# are in the default domain i.e the domain to which existing tenants are
# migrated. The default_domain_id value configured here should be the same as
# the value used during migration of tenants to keystone domains.
# default_domain_id = default
#
# For a new installation, or an installation in which keystone projects may
# move between domains, you should disable backwards compatible name matching
# in ACLs by setting allow_names_in_acls to false:
# allow_names_in_acls = true

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
# This facility may be used to temporarily remove a Swift node from a load
# balancer pool during maintenance or upgrade (remove the file to allow the
# node back into the load balancer pool).
# disable_path =

[filter:cache]
use = egg:swift#memcache
# You can override the default log routing for this filter here:
# set log_name = cache
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# If not set here, the value for memcache_servers will be read from
# memcache.conf (see memcache.conf-sample) or lacking that file, it will
# default to the value below. You can specify multiple servers separated with
# commas, as in: 10.1.2.3:,10.1.2.4:
memcache_servers = 127.0.0.1:
#
# Sets how memcache values are serialized and deserialized:
# = older, insecure pickle serialization
# = json serialization but pickles can still be read (still insecure)
# = json serialization only (secure and the default)
# If not set here, the value for memcache_serialization_support will be read
# from /etc/swift/memcache.conf (see memcache.conf-sample).
# To avoid an instant full cache flush, existing installations should
# upgrade with , then set to and reload, then after some time ( hours)
# set to and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support =
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections =
#
# More options documented in memcache.conf-sample

[filter:ratelimit]
use = egg:swift#ratelimit
# You can override the default log routing for this filter here:
# set log_name = ratelimit
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# clock_accuracy should represent how accurate the proxy servers' system clocks
# are with each other. means that all the proxies' clock are accurate to
# each other within millisecond. No ratelimit should be higher than the
# clock accuracy.
# clock_accuracy =
#
# max_sleep_time_seconds =
#
# log_sleep_time_seconds of means disabled
# log_sleep_time_seconds =
#
# allows for slow rates (e.g. running up to sec's behind) to catch up.
# rate_buffer_seconds =
#
# account_ratelimit of means disabled
# account_ratelimit = # DEPRECATED- these will continue to work but will be replaced
# by the X-Account-Sysmeta-Global-Write-Ratelimit flag.
# Please see ratelimiting docs for details.
# these are comma separated lists of account names
# account_whitelist = a,b
# account_blacklist = c,d # with container_limit_x = r
# for containers of size x limit write requests per second to r. The container
# rate will be linearly interpolated from the values given. With the values
# below, a container of size will get a rate of .
# container_ratelimit_0 =
# container_ratelimit_10 =
# container_ratelimit_50 = # Similarly to the above container-level write limits, the following will limit
# container GET (listing) requests.
# container_listing_ratelimit_0 =
# container_listing_ratelimit_10 =
# container_listing_ratelimit_50 =

[filter:domain_remap]
use = egg:swift#domain_remap
# You can override the default log routing for this filter here:
# set log_name = domain_remap
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# storage_domain = example.com
# path_root = v1 # Browsers can convert a host header to lowercase, so check that reseller
# prefix on the account is the correct case. This is done by comparing the
# items in the reseller_prefixes config option to the found prefix. If they
# match except for case, the item from reseller_prefixes will be used
# instead of the found reseller prefix. When none match, the default reseller
# prefix is used. When no default reseller prefix is configured, any request
# with an account prefix not in that list will be ignored by this middleware.
# reseller_prefixes = AUTH
# default_reseller_prefix =

[filter:catch_errors]
use = egg:swift#catch_errors
# You can override the default log routing for this filter here:
# set log_name = catch_errors
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

[filter:cname_lookup]
# Note: this middleware requires python-dnspython
use = egg:swift#cname_lookup
# You can override the default log routing for this filter here:
# set log_name = cname_lookup
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# Specify the storage_domain that match your cloud, multiple domains
# can be specified separated by a comma
# storage_domain = example.com
#
# lookup_depth = # Note: Put staticweb just after your auth filter(s) in the pipeline
[filter:staticweb]
use = egg:swift#staticweb

# Note: Put tempurl before dlo, slo and your auth filter(s) in the pipeline
[filter:tempurl]
use = egg:swift#tempurl
# The methods allowed with Temp URLs.
# methods = GET HEAD PUT POST DELETE
#
# The headers to remove from incoming requests. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. incoming_allow_headers is a list of exceptions to these
# removals.
# incoming_remove_headers = x-timestamp
#
# The headers allowed as exceptions to incoming_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# incoming_allow_headers =
#
# The headers to remove from outgoing responses. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. outgoing_allow_headers is a list of exceptions to these
# removals.
# outgoing_remove_headers = x-object-meta-*
#
# The headers allowed as exceptions to outgoing_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# outgoing_allow_headers = x-object-meta-public-*

# Note: Put formpost just before your auth filter(s) in the pipeline
[filter:formpost]
use = egg:swift#formpost

# Note: Just needs to be placed before the proxy-server in the pipeline.
[filter:name_check]
use = egg:swift#name_check
# forbidden_chars = '"`<>
# maximum_length =
# forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$

[filter:list-endpoints]
use = egg:swift#list_endpoints
# list_endpoints_path = /endpoints/

[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port =
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host = localhost
# access_log_statsd_port =
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set only
# these headers are logged. Multiple headers can be defined as comma separated
# list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to , only the first characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# or so characters is unique enough that you can trace/debug
# token usage. Set to to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix =
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
#
# Note: The double proxy-logging in the pipeline is not a mistake. The
# left-most proxy-logging is there to log requests that were handled in
# middleware and never made it through to the right-most middleware (and
# proxy server). Double logging is prevented for normal requests. See
# proxy-logging docs.

# Note: Put before both ratelimit and auth in the pipeline.
[filter:bulk]
use = egg:swift#bulk
# max_containers_per_extraction =
# max_failed_extractions =
# max_deletes_per_request =
# max_failed_deletes =

# In order to keep a connection active during a potentially long bulk request,
# Swift may return whitespace prepended to the actual response body. This
# whitespace will be yielded no more than every yield_frequency seconds.
# yield_frequency =

# Note: The following parameter is used during a bulk delete of objects and
# their container. This would frequently fail because it is very likely
# that all replicated objects have not been deleted by the time the middleware got a
# successful response. It can be configured the number of retries. And the
# number of seconds to wait between each retry will be 1.5**retry
# delete_container_retry_count =

# Note: Put after auth and staticweb in the pipeline.
[filter:slo]
use = egg:swift#slo
# max_manifest_segments =
# max_manifest_size =
# min_segment_size =
# Start rate-limiting SLO segment serving after the Nth segment of a
# segmented object.
# rate_limit_after_segment =
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. means no rate-limiting.
# rate_limit_segments_per_sec =
#
# Time limit on GET requests (seconds)
# max_get_time =

# Note: Put after auth and staticweb in the pipeline.
# If you don't put it in the pipeline, it will be inserted for you.
[filter:dlo]
use = egg:swift#dlo
# Start rate-limiting DLO segment serving after the Nth segment of a
# segmented object.
# rate_limit_after_segment =
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. means no rate-limiting.
# rate_limit_segments_per_sec =
#
# Time limit on GET requests (seconds)
# max_get_time =

# Note: Put after auth in the pipeline.
[filter:container-quotas]
use = egg:swift#container_quotas

# Note: Put after auth in the pipeline.
[filter:account-quotas]
use = egg:swift#account_quotas

[filter:gatekeeper]
use = egg:swift#gatekeeper
# You can override the default log routing for this filter here:
# set log_name = gatekeeper
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

[filter:container_sync]
use = egg:swift#container_sync
# Set this to false if you want to disallow any full url values to be set for
# any new X-Container-Sync-To headers. This will keep any new full urls from
# coming in, but won't change any existing values already in the cluster.
# Updating those will have to be done manually, as knowing what the true realm
# endpoint should be cannot always be guessed.
# allow_full_urls = true
# Set this to specify this clusters //realm/cluster as "current" in /info
# current = //REALM/CLUSTER

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after catch_errors, gatekeeper and healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enable you to switch profilers which should inherit from python
# standard profiler. Currently the supported value can be 'cProfile',
# 'eventlet.green.profile' etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file. Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with an unique value like: /var/log/swift/profile/proxy.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shutdown.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

# Note: Put after slo, dlo in the pipeline.
# If you don't put it in the pipeline, it will be inserted automatically.
[filter:versioned_writes]
use = egg:swift#versioned_writes
# Enables using versioned writes middleware and exposing configuration
# settings via HTTP GET /info.
# WARNING: Setting this option bypasses the "allow_versions" option
# in the container configuration file, which will be eventually
# deprecated. See documentation for more details.
# allow_versioned_writes = false

Next, configure the swift storage node:

ss1. Install the packages and do the basic setup.

 yum install xfsprogs rsync

 mkfs.xfs /dev/sdb
mkdir -p /srv/node/sdb

Here is what the disks on my swift node look like:

 [root@node3 opt]# fdisk -l

 Disk /dev/sda: 500.1 GB
Disk label type: dos
Disk identifier: 0x0005b206

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *                                            Linux
/dev/sda2                                        8e      Linux LVM

Disk /dev/sdb: 500.1 GB

Disk /dev/mapper/centos00-swap: 16.9 GB

Disk /dev/mapper/centos00-root: 53.7 GB

Disk /dev/mapper/centos00-home: 429.0 GB

ss2. Configure /etc/fstab

Append the following line to the end of the file:

 /dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=  

ss3. Mount the disk

 mount /srv/node/sdb
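
To confirm the device actually mounted, a quick check of my own (not part of the original guide):

 # Both commands should show /dev/sdb mounted on /srv/node/sdb as xfs.
 mount | grep /srv/node/sdb
 df -h /srv/node/sdb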

ss4. Configure /etc/rsyncd.conf by appending the following:

 uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.130

[account]
max connections =
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections =
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections =
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

ss5. Start the rsync service

 systemctl enable rsyncd.service
systemctl start rsyncd.service
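
As an extra sanity check of my own, you can ask the running rsync daemon to list the modules it exports; account, container, and object should all show up:

 # Lists the modules exported by the rsync daemon on the swift node.
 rsync rsync://192.168.1.130/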

ss6. Install the swift storage components

 yum install openstack-swift-account openstack-swift-container openstack-swift-object

ss7. Configure the account, container, and object servers

 curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/liberty
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/liberty
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/liberty

Edit /etc/swift/account-server.conf; keep the defaults for everything not shown below.

 [DEFAULT]
bind_ip = 192.168.1.130
bind_port =
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit /etc/swift/container-server.conf; keep the defaults for everything not shown below.

 [DEFAULT]
bind_ip = 192.168.1.130
bind_port =
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit /etc/swift/object-server.conf; keep the defaults for everything not shown below.

 [DEFAULT]
bind_ip = 192.168.1.130
bind_port =
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
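
The bind_port values are not shown above; for reference, the Liberty install guide conventionally uses the following ports. Treat them as an assumption and double-check against the guide you are following:

 # Conventional Swift storage ports (assumption, from the Liberty install guide):
 #   account-server.conf   : bind_port = 6002
 #   container-server.conf : bind_port = 6001
 #   object-server.conf    : bind_port = 6000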

ss8. Fix ownership

 chown -R swift:swift /srv/node

 mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift

The following steps go back to the controller node:

s4. Create the account ring. First cd into the /etc/swift directory.

 swift-ring-builder account.builder create   

 swift-ring-builder account.builder add --region  --zone  --ip 192.168.1.130 --port  --device sdb --weight 

 swift-ring-builder account.builder rebalance

s5. Create the container ring, again from the /etc/swift directory.

 swift-ring-builder container.builder create   

 swift-ring-builder container.builder add --region  --zone  --ip 192.168.1.130 --port  --device sdb --weight 

 swift-ring-builder container.builder rebalance

s6. Create the object ring, again from the /etc/swift directory.

 swift-ring-builder object.builder create   

 swift-ring-builder object.builder add --region  --zone  --ip 192.168.1.130 --port  --device sdb --weight 

 swift-ring-builder object.builder rebalance
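
A hedged sketch of the account ring build with concrete values filled in; the numbers here are assumptions (part power 10, a single replica because this test setup has only one device, min_part_hours 1, weight 100, and the conventional account port 6002), not values taken from the original commands:

 cd /etc/swift
 # create <part_power> <replicas> <min_part_hours>; 1 replica because only one
 # device (sdb on 192.168.1.130) is in the ring.
 swift-ring-builder account.builder create 10 1 1
 swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.130 --port 6002 --device sdb --weight 100
 swift-ring-builder account.builder rebalance
 # container.builder (port 6001) and object.builder (port 6000) are built the same way.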

s7. Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.
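
A minimal sketch of that copy step, assuming SSH access from the controller to the storage node:

 scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz node3:/etc/swift/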

s8. Configure /etc/swift/swift.conf, then copy the finished file to node3 (and to any other node running the object services or the proxy-server).

 curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/liberty

/etc/swift/swift.conf

 [swift-hash]
swift_hash_path_suffix = shihuc
swift_hash_path_prefix = openstack

[storage-policy:0]
name = Policy-0
default = yes

s9. Fix ownership

 chown -R root:swift /etc/swift

s10. Start the services (on the controller node and on any node running the proxy-server)

 systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service

s11. Start the account, container, and object services (on node3, the object node)

 systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service   openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

Finally, verification. First add the environment variable (on the controller node):

 echo "export OS_AUTH_VERSION=3"  | tee -a admin-openrc.sh demo-openrc.sh
 [root@node0 opt]# swift stat
Account: AUTH_c6669377868c438f8a81cc234f85338f
Containers:
Objects:
Bytes:
Containers in policy "policy-0":
Objects in policy "policy-0":
Bytes in policy "policy-0":
X-Account-Project-Domain-Id: default
X-Timestamp: 1456454203.32398
X-Trans-Id: txaee8904dd48f484bb7534-0056d4f908
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes

Output like the above means the deployment is basically working. Next, try uploading a file and listing it back; if you see output like the following, everything works. Success!

 [root@node0 opt]# swift upload C1 admin-openrc.sh
admin-openrc.sh
[root@node0 opt]#
[root@node0 opt]# swift list
C1
[root@node0 opt]# swift list C1
admin-openrc.sh
[root@node0 opt]#
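
To close the loop, the object can also be downloaded back and compared with the original; a small sketch (the /tmp/swift-check directory is my own choice):

 mkdir -p /tmp/swift-check && cd /tmp/swift-check
 swift download C1 admin-openrc.sh
 # No diff output means the upload/download round trip preserved the file.
 diff admin-openrc.sh /opt/admin-openrc.sh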

Before wrapping up, here are the problems I ran into while deploying swift.

problem1:

 [root@node0 opt]# swift stat -v
/usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
'Providing attr without filter_value to get_urls() is '
Account HEAD failed: http://node0:8080/v1/AUTH_c6669377868c438f8a81cc234f85338f 503 Service Unavailable

To debug this 503 error, I checked /var/log/messages on the controller node:

 Feb  :: localhost swift-account-server: Traceback (most recent call last):
Feb :: localhost swift-account-server: File "/usr/bin/swift-account-server", line , in <module>
Feb :: localhost swift-account-server: sys.exit(run_wsgi(conf_file, 'account-server', **options))
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line , in run_wsgi
Feb :: localhost swift-account-server: loadapp(conf_path, global_conf=global_conf)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line , in loadapp
Feb :: localhost swift-account-server: ctx = loadcontext(loadwsgi.APP, conf_file, global_conf=global_conf)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line , in loadcontext
Feb :: localhost swift-account-server: global_conf=global_conf)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in loadcontext
Feb :: localhost swift-account-server: global_conf=global_conf)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in _loadconfig
Feb :: localhost swift-account-server: return loader.get_context(object_type, name, global_conf)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line , in get_context
Feb :: localhost swift-account-server: object_type, name=name, global_conf=global_conf)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in get_context
Feb :: localhost swift-account-server: global_additions=global_additions)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in _pipeline_app_context
Feb :: localhost swift-account-server: for name in pipeline[:-]]
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line , in get_context
Feb :: localhost swift-account-server: object_type, name=name, global_conf=global_conf)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in get_context
Feb :: localhost swift-account-server: object_type, name=name)
Feb :: localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in find_config_section
Feb :: localhost swift-account-server: self.filename))
Feb :: localhost swift-account-server: LookupError: No section 'healthcheck' (prefixed by 'filter') found in config /etc/swift/account-server.conf
Feb :: localhost systemd: openstack-swift-account.service: main process exited, code=exited, status=/FAILURE
Feb :: localhost systemd: Unit openstack-swift-account.service entered failed state.
Feb :: localhost systemd: openstack-swift-account.service failed.

So the healthcheck section was missing, which pointed back at the configuration files: account-server.conf, container-server.conf, and object-server.conf. It turned out the way I had originally fetched those three files was broken. I had downloaded them with wget -qO target source, did not notice that the command was wrong, and the resulting files really did lack the healthcheck section. After re-fetching them as the official guide describes and restarting the account, container, and object services, systemd-cgls showed all of the swift services running, so the configuration was fine.
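
A quick way to confirm that the downloaded configs contain the sections the pipeline expects (my own sanity check, not from the guide):

 # Each file should list [filter:healthcheck] and [filter:recon] among its sections.
 grep -n "^\[filter:" /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf

With the sections in place, verify again: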

 [root@node0 swift]# swift stat -v
/usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
'Providing attr without filter_value to get_urls() is '
StorageURL: http://node0:8080/v1/AUTH_c6669377868c438f8a81cc234f85338f
Auth Token: 98bd7931a5834f6dba424dcab9a14d3a
Account: AUTH_c6669377868c438f8a81cc234f85338f
Containers:
Objects:
Bytes:
X-Put-Timestamp: 1456453671.08880
X-Timestamp: 1456453671.08880
X-Trans-Id: tx952ca1fe02b94b35b11d4-0056cfb826
Content-Type: text/plain; charset=utf-8
[root@node0 swift]#

problem2:

 /usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
'Providing attr without filter_value to get_urls() is '

The fix for the warning above, which appears whenever a swift command runs, is simple: give the swift client the region information. I added export OS_REGION_NAME=RegionOne to admin-openrc.sh, ran source admin-openrc.sh, and the warning no longer appeared on subsequent swift commands.
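
For completeness, a sketch of the line I appended (adding it to demo-openrc.sh as well is my own suggestion, mirroring the OS_AUTH_VERSION step above):

 echo "export OS_REGION_NAME=RegionOne" | tee -a admin-openrc.sh demo-openrc.sh
 source admin-openrc.sh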

Finally, a screenshot of the swift dashboard to wrap things up!
