Part 1: Cluster Overview
keepalived + haproxy + RabbitMQ cluster + MariaDB Galera high-availability cluster
 
Deploying OpenStack with a single control node is very risky: once the server hosting that node goes down, the whole OpenStack deployment is out of service.
 
So we need at least two control nodes. The approach here is to place keepalived + haproxy in front, so the OpenStack compute nodes reach the database and the control-node services through a VIP that is reverse-proxied to the backends.
For the database we use a MariaDB Galera high-availability cluster; on the OpenStack control nodes, apart from RabbitMQ being clustered, the configuration is essentially identical across all of them.
 
The architecture is as follows:
 
 
 
OpenStack control nodes + MariaDB Galera cluster: 10.1.36.21, 10.1.36.22, 10.1.36.23
3 compute nodes: 10.1.36.24, 10.1.36.25, 10.1.36.26
keepalived + haproxy: 10.1.36.16, 10.1.36.17  VIP: 10.1.36.28
Ceph cluster: mon 10.1.36.11, 10.1.36.12, 10.1.36.13
              osd 10.1.36.11, 10.1.36.12, 10.1.36.13, 10.1.36.14
 
Environment preparation
 

 
System tuning
 
echo '* - nofile 65535' >> /etc/security/limits.conf
ulimit -SHn 65535
 
cat >>/etc/sysctl.conf  <<EOF
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.somaxconn = 262144
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.ip_forward = 0
net.ipv4.ip_local_port_range = 5000 65000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_no_metrics_save=1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_sack = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096 16384 16777216
fs.file-max=65536
fs.inotify.max_queued_events=99999999
fs.inotify.max_user_watches=99999999
fs.inotify.max_user_instances=65535
net.core.default_qdisc=fq
vm.overcommit_memory=1
EOF
 
sysctl -p
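A quick spot check (a minimal sketch; the parameter names are the ones set above) to confirm the values were applied:

sysctl net.ipv4.ip_local_port_range net.core.somaxconn net.ipv4.tcp_tw_reuse
# expected output mirrors the values written to /etc/sysctl.conf, e.g.
# net.ipv4.ip_local_port_range = 5000 65000
# net.core.somaxconn = 262144
# net.ipv4.tcp_tw_reuse = 1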
 
 
keepalived + haproxy configuration for the OpenStack cluster
 
Install common packages
yum install net-tools vim -y
 
Rename the default NIC to eth0
vim /etc/default/grub
Find
GRUB_CMDLINE_LINUX=""
and change it to
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet net.ifnames=0 biosdevname=0"
Regenerate the grub configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
Finally, reboot and the default NIC will come back as eth0.
 
1. Enable IP forwarding and binding to non-local IPs
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
echo "net.ipv4.ip_nonlocal_bind = 1" >>/etc/sysctl.conf
sysctl -p
2. Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
3. Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
 
 
Install keepalived
Keepalived is a free, open-source routing software written in C that provides functionality similar to layer 3, 4 and 7 switching. Its two main features are load balancing and high availability: load balancing relies on the Linux virtual server kernel module (IPVS), while high availability is implemented with the VRRP protocol to fail services over between machines.
 
yum install keepalived -y
 
keepalived configuration
 
MASTER keepalived1 10.1.36.16
 
[root@keepalived1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id lb-backup-10.1.36.16
}
 
vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -60
}
 
vrrp_instance VI_openstack-master {
    state MASTER
    priority 120
    unicast_src_ip 10.1.36.16
    unicast_peer {
        10.1.36.17
    }
    dont_track_primary
    interface eth0
    virtual_router_id 36
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 6ec1960a
    }
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        10.1.36.28
    }
}
 
BACKUP keepalived2 10.1.36.17
 
[root@keepalived2 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id lb-backup-10.1.36.17
}
 
vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -60
}
 
 
vrrp_instance VI_openstack-master {
    state BACKUP
    priority 63
    unicast_src_ip 10.1.36.17
    unicast_peer {
        10.1.36.16
    }
    dont_track_primary
    interface eth0
    virtual_router_id 36
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 6ec1960a
    }
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        10.1.36.28
    }
}
After both nodes are configured, start the keepalived service on the master and the backup:
systemctl start keepalived
systemctl enable keepalived
 
Note 1: "killall -0 haproxy" checks whether a haproxy process exists and returns 0 if it does. The check logic is therefore: run the script "killall -0 haproxy" every interval; if it returns 0, the "weight -60" penalty is not applied, otherwise it is. Running killall -0 haproxy by hand revealed that the killall command was not even installed.
With killall missing, the script can never succeed and always returns non-zero, which explains why node2's priority kept dropping by 60. Installing the package that provides killall fixes the problem.
Install the package that provides killall:
yum install psmisc -y
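A quick way to see what the health-check script reports (a minimal sketch; run it on a node where haproxy is expected to be up):

killall -0 haproxy; echo $?
# 0 -> haproxy is running, keepalived keeps the full priority
# non-zero -> haproxy is down (or killall is missing), the -60 weight penalty is applied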
 
Note 2: enable IP forwarding and binding to non-local IPs
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
echo "net.ipv4.ip_nonlocal_bind = 1" >>/etc/sysctl.conf
sysctl -p
 
Install HAProxy

HAProxy is a free, very fast and reliable solution offering high availability, load balancing and proxying for TCP- and HTTP-based applications. It is particularly suited to very high traffic web sites and powers a significant share of the world's most visited ones. Over the years it has become the de-facto standard open-source load balancer, ships with most mainstream Linux distributions, and is often deployed by default on cloud platforms. Since it does not advertise itself, we only know it is in use when administrators report it :-)
Its mode of operation makes it very easy and risk-free to integrate into an existing architecture, while still offering the possibility of not exposing fragile web servers directly to the internet, as shown below:
 
 
 
 
Install haproxy with yum
 
yum install haproxy -y
 
Build from source
yum -y install gcc gcc-c++ autoconf automake zlib zlib-devel  openssl openssl-devel pcre-devel systemd-devel
wget http://www.haproxy.org/download/1.8/src/haproxy-1.8.23.tar.gz
tar -xvf haproxy-1.8.23.tar.gz
cd haproxy-1.8.23/
make ARCH=x86_64 TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1  USE_SYSTEMD=1 USE_CPU_AFFINITY=1 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy
cp haproxy /usr/sbin/
Note: if make fails, run make clean to remove leftovers of a previous build
Create the systemd unit:
# cat /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
Create the directories and user:
mkdir /etc/haproxy
useradd haproxy -s /sbin/nologin
mkdir /var/lib/haproxy
chown haproxy.haproxy /var/lib/haproxy/ -R
 
 
haproxy configuration
# cat /etc/haproxy/haproxy.cfg
global
    maxconn     4096
    log         127.0.0.1 local3 info
    user        haproxy
    group       haproxy
    chroot      /var/lib/haproxy
    nbproc      4
    daemon
 
defaults
    log global
    option          dontlognull
    option          nolinger
    option          http_proxy
    mode http
    retries 3
    timeout connect 10s
    timeout client  1m
    timeout server  1m
    timeout queue 1m
    timeout check 10s
    timeout tunnel  12h
    balance roundrobin
 
 
listen admin_stats
   bind 0.0.0.0:1080
   mode http
   maxconn 10
   stats enable
   stats refresh 30s
   stats uri /haproxy_status
   stats auth admin:04aea9de5f79
   stats hide-version
 
listen mariadb
    bind 0.0.0.0:3306
    mode tcp
    timeout client 3600s
    timeout server 3600s
    server controller1 10.1.36.21:3306 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:3306 check inter 2000 fall 3 rise 5
    server controller3 10.1.36.23:3306 check inter 2000 fall 3 rise 5
 
listen memcache
    bind 0.0.0.0:11211
    mode tcp
    timeout client 3600s
    timeout server 3600s
    server controller1 10.1.36.21:11211 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:11211 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:11211 check inter 2000 fall 3 rise 5 backup
 
listen openstack_rabbit
    bind 0.0.0.0:5672
    mode tcp
    server controller1 10.1.36.21:5672 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:5672 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:5672 check inter 2000 fall 3 rise 5 backup
 
listen rabbitmq_management
    bind 0.0.0.0:15672
    mode tcp
    server controller1 10.1.36.21:15672 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:15672 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:15672 check inter 2000 fall 3 rise 5 backup
 
listen keystone_internal
    bind 0.0.0.0:5000
    balance  source
    mode tcp
    option  tcplog
    server controller1 10.1.36.21:5000 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:5000 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:5000 check inter 2000 fall 3 rise 5 backup
 
listen glance_api
    bind 0.0.0.0:9292
    balance  source
    timeout client 6h
    timeout server 6h
    option  tcplog
    server controller1 10.1.36.21:9292 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:9292 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:9292 check inter 2000 fall 3 rise 5 backup
 
listen nova_novncproxy
    bind 0.0.0.0:6080
    balance  source
    option  tcplog
    server controller1 10.1.36.21:6080 check inter 2000 fall 3 rise 5 
    server controller2 10.1.36.22:6080 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:6080 check inter 2000 fall 3 rise 5 backup
 
listen nova_api
    bind 0.0.0.0:8774
    balance  source
    mode tcp
    option  tcplog
    server controller1 10.1.36.21:8774 check inter 2000 fall 3 rise 5 
    server controller2 10.1.36.22:8774 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:8774 check inter 2000 fall 3 rise 5 backup
 
listen nova_metadata
    bind 0.0.0.0:8775
    balance  source
    mode tcp
    server controller1 10.1.36.21:8775 check inter 2000 fall 3 rise 5 
    server controller2 10.1.36.22:8775 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:8775 check inter 2000 fall 3 rise 5 backup
 
listen placement_api
    bind 0.0.0.0:8778
    balance  source
    mode tcp
    server controller1 10.1.36.21:8778 check inter 2000 fall 3 rise 5 
    server controller2 10.1.36.22:8778 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:8778 check inter 2000 fall 3 rise 5 backup
 
listen neutron_server
    bind 0.0.0.0:9696
    balance  source
    mode tcp
    server controller1 10.1.36.21:9696 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:9696 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:9696 check inter 2000 fall 3 rise 5 backup
 
listen horizon
    bind 0.0.0.0:80
    balance  source
    option  httpchk
    option  tcplog
    server controller1 10.1.36.21:80 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:80 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:80 check inter 2000 fall 3 rise 5 backup
 
listen cinder_api
    bind 0.0.0.0:8776
    balance  source
    mode tcp
    option  tcplog
    server controller1 10.1.36.21:8776 check inter 2000 fall 3 rise 5
    server controller2 10.1.36.22:8776 check inter 2000 fall 3 rise 5 backup
    server controller3 10.1.36.23:8776 check inter 2000 fall 3 rise 5 backup
 
 
Start haproxy
systemctl start haproxy
systemctl enable haproxy
 
Verify that the VIP is on the keepalived master node and that haproxy is listening on the relevant OpenStack ports
[root@keepalived1 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:a0:d1:e9:e3:ac brd ff:ff:ff:ff:ff:ff
    inet 10.1.36.16/16 brd 10.1.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.1.36.28/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2a0:d1ff:fee9:e3ac/64 scope link
       valid_lft forever preferred_lft forever
 
[root@keepalived1 ~]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:8776            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:5672            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:8778            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:11211           0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      5108/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:1080            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      5478/master         
tcp        0      0 0.0.0.0:35357           0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      5738/haproxy        
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      5738/haproxy        
tcp6       0      0 :::22                   :::*                    LISTEN      5108/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      5478/master  
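At this point a failover test is worthwhile. The sketch below is an assumption based on the configuration above (VIP 10.1.36.28, stats on port 1080 with the admin account defined in haproxy.cfg): stopping haproxy on the master should trigger the vrrp_script weight penalty and move the VIP to the backup within a few advert intervals.

# on keepalived1 (current master)
systemctl stop haproxy

# on keepalived2 (backup) - the VIP should appear here shortly
ip addr show eth0 | grep 10.1.36.28

# from any machine that can reach the VIP - the stats page should still answer
curl -u admin:04aea9de5f79 http://10.1.36.28:1080/haproxy_status

# restore the master afterwards
systemctl start haproxy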
 
 
Deploy the Ceph storage cluster used by OpenStack
 
Deploy ceph with the ceph-deploy tool
Official Chinese documentation: http://docs.ceph.org.cn/
Environment
10.1.36.11 192.168.36.11 ceph-host-01
10.1.36.12 192.168.36.12 ceph-host-02
10.1.36.13 192.168.36.13 ceph-host-03
10.1.36.14 192.168.36.14 ceph-host-04
 
Ceph network layout

OS: CentOS 7.6

The ceph cluster nodes run 64-bit CentOS 7.6. There are four ceph nodes in total; each node ends up running two osd daemons, one osd per physical disk.
 
 
Disable selinux and the firewall
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
 
Install common packages
yum install vim wget deltarpm -y
 
Switch to the Aliyun CentOS mirror
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
 
Install the EPEL repository in advance
yum install epel-release -y
Note: using the Aliyun EPEL mirror makes installation a bit faster
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
 
1. Install ceph-deploy

1.1 Configure the hostnames and the hosts file. In this example ceph-deploy is installed on one of the nodes.
[root@controller1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.36.21 controller1
10.1.36.22 controller2
10.1.36.23 controller3
10.1.36.24 compute1
10.1.36.25 compute2
10.1.36.26 compute3
10.1.36.27 compute4
10.1.36.11  ceph-host-01
10.1.36.12  ceph-host-02
10.1.36.13  ceph-host-03
10.1.36.14  ceph-host-04
 
Note: the hostnames must match the entries in /etc/hosts

1.2 Use ssh-keygen to generate a key and ssh-copy-id to copy it to each node.
[root@ceph-host-01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:iVPfxuQVphRA8v2//XsM+PxzWjYrx5JnnHTbBdNYwTw root@controller1
The key's randomart image is:
+---[RSA 2048]----+
|        ..o.o.=..|
|         o o o E.|
|        . . + .+.|
|       o o = o+ .|
|      o S . =..o |
|       .   .. .oo|
|             o=+X|
|             +o%X|
|              B*X|
+----[SHA256]-----+
 
Example: copying the key to ceph-host-02
[root@ceph-host-01 ~]# ssh-copy-id ceph-host-02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'controller2 (10.30.1.222)' can't be established.
ECDSA key fingerprint is SHA256:VsMfdmYFzxV1dxKZi2OSp8QluRVQ1m2lT98cJt4nAFU.
ECDSA key fingerprint is MD5:de:07:2f:5c:13:9b:ba:0b:e5:0e:c2:db:3e:b8:ab:bd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@controller2's password:
 
Number of key(s) added: 1
 
Now try logging into the machine, with:   "ssh 'ceph-host-02'"
and check to make sure that only the key(s) you wanted were added.
 
1.3 Install ceph-deploy.

Before installing, configure the yum repository; the relatively new Nautilus release is used here.
[root@ceph-host-01 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
type=rpm-md
 
 
[root@ceph-host-01 ~]# yum install ceph-deploy  python-setuptools python2-subprocess32 -y
 
2. Create the ceph monitor role
2.1 ceph-deploy produces a number of files while it runs, so create a working directory first, e.g. ceph-cluster
 
 
[root@ceph-host-01 ~]# mkdir -pv ceph-cluster
[root@ceph-host-01 ~]# cd ceph-cluster
 
2.2 Initialize the mon nodes and prepare to create the cluster:
[root@ceph-host-01 ceph-cluster]# ceph-deploy new ceph-host-01 ceph-host-02 ceph-host-03
Edit the generated ceph cluster configuration file
[root@ceph-host-01 ceph-cluster]#cat ceph.conf
[global]
fsid = b071b40f-44e4-4a25-bdb3-8b654e4a429a
mon_initial_members = ceph-host-01, ceph-host-02, ceph-host-03
mon_host = 10.1.36.11,10.1.36.12,10.1.36.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 2
mon clock drift warn backoff = 30
 
public_network = 10.1.36.0/24
cluster_network = 192.168.36.0/24
 
max_open_files = 131072
mon_pg_warn_max_per_osd = 1000
mon_max_pg_per_osd = 1000
osd pool default pg num = 256
osd pool default pgp num = 256
osd pool default size = 3
osd pool default min size = 1
 
mon_osd_full_ratio = .90
mon_osd_nearfull_ratio = .80
osd_deep_scrub_randomize_ratio = 0.01
 
[mon]
mon_allow_pool_delete = true
mon_osd_down_out_interval = 600
mon_osd_min_down_reporters = 3
[mgr]
mgr modules = dashboard
[osd]
osd_journal_size = 20480
osd_max_write_size = 1024
osd mkfs type = xfs
osd_recovery_op_priority = 1
osd_recovery_max_active = 1
osd_recovery_max_single_start = 1
osd_recovery_threads = 1
osd_recovery_max_chunk = 1048576
osd_max_backfills = 1
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 7
osd_recovery_sleep = 0
 
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
rbd_concurrent_management_ops = 10
rbd_cache_size = 67108864
rbd_cache_max_dirty = 50331648
rbd_cache_target_dirty = 33554432
rbd_cache_max_dirty_age = 2
rbd_default_format = 2
 
Note: the above is a tuned configuration settled on after some consideration; review, add or remove options carefully before using it in production.
 
2.3 Install the ceph packages on all nodes
Use ceph-deploy to install ceph. You can also install ceph manually on each node; the version installed depends on the configured yum repository.
[root@ceph-host-01 ceph-cluster]#ceph-deploy install  --no-adjust-repos ceph-host-01 ceph-host-02 ceph-host-03 ceph-host-04
# Without --no-adjust-repos, ceph-deploy keeps replacing the repos with its own defaults, which is a trap

Tip: to install the ceph packages independently on each cluster node, run:
# yum install ceph ceph-radosgw -y
2.4 Configure the initial mon nodes and gather all the keys
[root@ceph-host-01 ceph-cluster]# ceph-deploy mon create-initial 
 
2.5 Check the running services
# ps -ef|grep ceph
ceph        1916       1  0 12:05 ?        00:00:03 /usr/bin/ceph-mon -f --cluster ceph --id controller1 --setuser ceph --setgroup ceph
 
2.6 From the admin node, copy the configuration file and the admin key to the admin node and the Ceph nodes
 
[root@ceph-host-01 ceph-cluster]# ceph-deploy admin ceph-host-01 ceph-host-02 ceph-host-03 ceph-host-04
 
On every node, make ceph.client.admin.keyring readable
# chmod +r /etc/ceph/ceph.client.admin.keyring
 
3. Create the ceph osd role (osd deployment)

Newer ceph-deploy versions use create directly,
which is equivalent to prepare, activate and osd create --bluestore
ceph-deploy osd create --data /dev/sdb ceph-host-01
ceph-deploy osd create --data /dev/sdb ceph-host-02
ceph-deploy osd create --data /dev/sdb ceph-host-03
ceph-deploy osd create --data /dev/sdb ceph-host-04
 
Note: if a disk already contains data it must be wiped first, for example:
ceph-deploy disk zap ceph-host-01 /dev/sdb
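After the osds are created, it is worth confirming that they all registered and came up. A minimal check (standard ceph commands, run from any node with an admin keyring):

ceph osd tree        # every host should show its osd(s) with STATUS "up"
ceph -s              # "osd: 4 osds: 4 up, 4 in" is expected at this point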
 
4. Create the mgr role
Since Ceph 12 (Luminous), the manager daemon is mandatory. Add one mgr for every machine running a monitor, otherwise the cluster stays in the WARN state.
 
[root@ceph-host-01 ceph-cluster]# ceph-deploy mgr create ceph-host-01 ceph-host-02 ceph-host-03 
 
5. Check the cluster health
[root@ceph-host-01 ~]# ceph health
HEALTH_OK
[root@controller3 ~]# ceph -s
  cluster:
    id:     02e63c58-5200-45c9-b592-07624f4893a5
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-host-01,ceph-host-02,ceph-host-03 (age 59m)
    mgr: ceph-host-01(active, since 4m), standbys: ceph-host-02,ceph-host-03
    osd: 4 osds: 4 up (since 87m), 4 in (since 87m)
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   5.0 GiB used, 90 GiB / 95 GiB avail
    pgs:     
 
Add more osds
ceph-deploy osd create --data /dev/sdc ceph-host-01
ceph-deploy osd create --data /dev/sdc ceph-host-02
ceph-deploy osd create --data /dev/sdc ceph-host-03
ceph-deploy osd create --data /dev/sdc ceph-host-04
 
6. Create (and delete) ceph storage pools
 
# volumes is persistent (Cinder) storage, vms is the ephemeral backend for instances, images holds the Glance images
Create the volumes pool, used by the Cinder service
# ceph osd pool create volumes 128 128 
Create the images pool, used by the Glance service
# ceph osd pool create images 128 128 
Create the vms pool, used by the Nova service
# ceph osd pool create vms 128 128 
 
Newly created pools must be initialized before use. Initialize them with the rbd tool:
rbd pool init volumes
rbd pool init images
rbd pool init vms
 
Tag the pools so they can be used by rbd
ceph osd pool application enable images rbd   
ceph osd pool application enable vms rbd   
ceph osd pool application enable volumes rbd
 
ceph osd pool set images size 3
ceph osd pool set vms size 3
ceph osd pool set volumes size 3
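A short smoke test (a sketch; the image name test-img is just a placeholder) to confirm the pools accept RBD objects before wiring them into OpenStack:

ceph df                                  # the three pools should be listed
rbd create volumes/test-img --size 128   # create a 128 MB test image
rbd ls volumes                           # should show test-img
rbd rm volumes/test-img                  # clean up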
 
Install the Ceph clients
# Nodes running glance-api need python-rbd;
# here glance-api runs on the control nodes, controller1 is used as the example
[root@controller1 ~]# yum install python-rbd -y
# Nodes running cinder-volume and nova-compute need ceph-common;
# here those services run on the compute nodes (and any dedicated cinder node), compute1 is used as the example
[root@compute1 ~]# yum install ceph-common -y
3. Authorization
1) Create the users
# ceph enables cephx authentication by default (see ceph.conf), so new users must be created and authorized for the nova/cinder and glance clients;
# on the admin node, create the client.glance and client.cinder users for the nodes running glance-api and cinder-volume and set their permissions;
# the permissions are granted per pool; the pool names must match the pools created above
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQDEQbZeLAp1KBAAVDpmyw2KqOij/LgD8bQrJQ==
 
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
        key = AQCbbLZeBRJUHxAAf8BWICFMfI38d71jIgan2A==
 
2) Push the client.glance key
After the ceph packages are installed, the cluster's ceph.conf and the keyrings must be copied to every client node.
These steps are only needed when cephx authentication is enabled in the Ceph configuration; if auth is set to none (disabled), they can be skipped.
# push the key generated for the client.glance user to the nodes running glance-api
[root@ceph-host-01 ceph-cluster]#  ceph auth get-or-create client.glance | ssh controller1 sudo tee /etc/ceph/ceph.client.glance.keyring
[root@ceph-host-01 ceph-cluster]#  ceph auth get-or-create client.glance | ssh controller2 sudo tee /etc/ceph/ceph.client.glance.keyring
[root@ceph-host-01 ceph-cluster]#  ceph auth get-or-create client.glance | ssh controller3 sudo tee /etc/ceph/ceph.client.glance.keyring
# and fix the owner and group of the keyring file
[root@controller1 ceph-cluster]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@controller1 ceph-cluster]# ssh root@controller2 chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@controller1 ceph-cluster]# ssh root@controller3 chown glance:glance /etc/ceph/ceph.client.glance.keyring
3) Push the client.cinder key (cinder-volume)
# push the key generated for the client.cinder user to the nodes running cinder-volume
[root@controller1 ceph-cluster]# ceph auth get-or-create client.cinder | ssh controller1 sudo tee /etc/ceph/ceph.client.cinder.keyring
# and fix the owner and group of the keyring file
[root@ceph-host-01 ceph-cluster]# ssh root@controller1 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
4) Push the client.cinder key (nova-compute)
# push the key generated for the client.cinder user to the nodes running nova-compute
[root@controller1 ceph-cluster]# ceph auth get-or-create client.cinder | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder.keyring
[root@controller1 ceph-cluster]# ceph auth get-or-create client.cinder | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
 
5) libvirt secret
The nodes running nova-compute must store the client.cinder key in libvirt; when a ceph-backed cinder volume is attached to an instance, libvirt needs this key to access the ceph cluster.
# push the client.cinder keyring from the ceph admin node to the compute nodes; the copied file is temporary and can be removed once the key has been added to libvirt
# add the key to libvirt on each compute node, compute1 is used as the example;
# first generate one uuid; all compute (and cinder) nodes can share it (do not repeat this step on the other nodes);
# the same uuid is used later in nova.conf, so keep it consistent
[root@ceph-host-01 ceph-cluster]# uuidgen
29355b97-1fd8-4135-a26e-d7efeaa27b0a
# add the secret to libvirt
[root@compute1 ~]# cd /etc/ceph
[root@compute1 ceph]# touch secret.xml
[root@compute1 ceph]# vim secret.xml
<secret ephemeral='no' private='no'>
     <uuid>29355b97-1fd8-4135-a26e-d7efeaa27b0a</uuid>
     <usage type='ceph'>
         <name>client.cinder secret</name>
     </usage>
</secret>
[root@compute1 ceph]# virsh secret-define --file secret.xml
[root@compute1 ceph]#virsh secret-set-value --secret 29355b97-1fd8-4135-a26e-d7efeaa27b0a --base64 AQCbbLZeBRJUHxAAf8BWICFMfI38d71jIgan2A==
Note: the value after --base64 is the key from /etc/ceph/ceph.client.cinder.keyring
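To confirm the secret was stored correctly, libvirt can be queried directly (standard virsh commands; the uuid is the one generated above):

virsh secret-list                                              # the uuid should be listed with a ceph client.cinder usage
virsh secret-get-value 29355b97-1fd8-4135-a26e-d7efeaa27b0a    # should print the same base64 key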
 
CentOS 7.6 with OpenStack Stein in detail (my production environment runs Stein, but it has quite a few minor issues; Train is recommended instead)
 
 
 
 
 
OpenStack instance creation workflow
The instance creation flow is shown in the figure below:
1. Through the dashboard or the CLI, the user requests authentication information from keystone via the RESTful API.
2. keystone authenticates the request and returns an auth-token for it.
3. Carrying the auth-token, a boot-instance request is sent to nova-api via the RESTful API.
4. nova-api receives the request and asks keystone whether the token and user are valid.
5. keystone verifies the token and returns the result to nova-api.
6. Once authenticated, nova-api talks to the database and creates the initial database record for the new instance.
7. nova-api calls rabbitmq to ask nova-scheduler whether resources (a host) are available to create the instance.
8. nova-scheduler listens on the message queue and picks up the nova-api request.
9. nova-scheduler queries the nova database for compute resource usage and uses its scheduling algorithm to pick a host that satisfies the instance's requirements.
10. If a suitable host is found, nova-scheduler updates the database with the physical host assigned to the instance.
11. nova-scheduler sends the create-instance request to nova-compute via an RPC call.
12. nova-compute picks the create-instance message up from its message queue.
13. nova-compute asks nova-conductor, via RPC, for the instance details (flavor etc.).
14. nova-conductor picks the nova-compute request up from the message queue.
15. nova-conductor looks up the instance information referenced in the request.
16. nova-conductor fetches the instance information from the database.
17. nova-conductor publishes the instance information back onto the message queue.
18. nova-compute picks the instance information up from its message queue.
19. nova-compute asks glance-api for the image needed to create the instance.
20. glance-api validates the token with keystone and returns the result.
21. With the token validated, nova-compute obtains the image information (URL).
22. nova-compute asks neutron-server for the network information needed to create the instance.
23. neutron-server validates the token with keystone and returns the result.
24. With the token validated, nova-compute obtains the network information.
25. nova-compute asks cinder-api for the persistent storage information needed to create the instance.
26. cinder-api validates the token with keystone and returns the result.
27. With the token validated, nova-compute obtains the persistent storage information.
28. nova-compute uses the configured virtualization driver to create the instance based on the gathered information.
 
Official OpenStack Stein installation guide: https://docs.openstack.org/stein/install/
 
Enable nested virtualization on the compute nodes
# cat /sys/module/kvm_intel/parameters/nested
N
Nested virtualization is not yet enabled on KVM
cat >/etc/modprobe.d/kvm-nested.conf<<EOF
options kvm-intel nested=1
options kvm-intel enable_shadow_vmcs=1
options kvm-intel enable_apicv=1
options kvm-intel ept=1
EOF
 
# modprobe -r kvm_intel
# lsmod | grep kvm
# modprobe -a kvm_intel
# lsmod | grep kvm
kvm_intel             170086  0
kvm                   566340  1 kvm_intel
irqbypass              13503  1 kvm
# cat /sys/module/kvm_intel/parameters/nested
Y
 
OK, nested virtualization is now enabled on KVM
 
Chapter 2: OpenStack Environment Preparation
[root@controller1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
 
                      eth0                  eth1                    eth2
controller1   10.1.36.21       Trunk                  Trunk                 control node
compute1   10.1.36.24       Trunk                  Trunk                 compute node
 
Modify the hostname and the hosts file
 
cat >>/etc/hosts<<EOF
10.1.36.21 controller1
10.1.36.22 controller2
10.1.36.23 controller3
10.1.36.24 compute1
10.1.36.25 compute2
10.1.36.26 compute3
10.1.36.27 compute4
EOF
 
Control node
hostnamectl set-hostname controller1
Compute node
hostnamectl set-hostname compute1
 
Base package installation
The base packages need to be installed on every OpenStack node, both control and compute.
Install the commonly used tools in advance
yum install -y vim net-tools wget lrzsz tree screen lsof tcpdump nmap bridge-utils
 
1. Install the EPEL repository
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
 
2. Install the OpenStack repository
CentOS 7.6 currently supports four OpenStack releases: queens, rocky, stein and train. We choose stein, the second newest.

Since the stein release the centos-release-openstack packages are available directly from the extras repository in the CentOS base sources, so they can be installed straight with yum
# yum search openstack | grep release
centos-release-openstack-queens.noarch : OpenStack from the CentOS Cloud SIG
centos-release-openstack-rocky.noarch : OpenStack from the CentOS Cloud SIG repo
centos-release-openstack-stein.noarch : OpenStack from the CentOS Cloud SIG repo
centos-release-openstack-train.noarch : OpenStack from the CentOS Cloud SIG repo
 
# yum install centos-release-openstack-stein -y
 
3. Install the OpenStack client
yum install -y python-openstackclient
4. Install the openstack-selinux management package
yum install -y openstack-selinux

5. Time synchronization
Install the network time service
The OpenStack nodes must be time-synchronized with each other, otherwise creating instances may fail.
# yum install chrony -y
# vim /etc/chrony.conf  # edit the NTP configuration
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
 
# systemctl enable chronyd.service   # enable the NTP service at boot
# systemctl start chronyd.service    # start the NTP service
# chronyc sources                    # verify NTP synchronization
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? ControllerNode                0   6     0     -     +0ns[   +0ns] +/-    0ns
 
Set the timezone
timedatectl set-timezone Asia/Shanghai
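A quick check that the clock is actually synchronizing (standard chrony/systemd commands):

chronyc tracking          # "Leap status : Normal" and a small system-time offset indicate a healthy sync
timedatectl status        # should show the Asia/Shanghai timezone set above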
 
---------------------------------------------------------------------------------------------------------------
Chapter 3: MySQL Database Cluster Deployment
 
Our MySQL cluster is deployed on the OpenStack control nodes 10.1.36.21, 10.1.36.22, 10.1.36.23; if resources allow, the database can instead be deployed on three dedicated servers.
 
Add the dedicated MariaDB repository
 
cat >/etc/yum.repos.d/mariadb.repo<<EOF
[mariadb]
name = MariaDB
baseurl=http://yum.mariadb.org/10.1/centos7-amd64
gpgcheck=0
enabled=1
EOF
 
Note: the official yum repository is very slow; you can replace it with the Tsinghua University mirror
cat >/etc/yum.repos.d/mariadb.repo<<EOF
[mariadb]
name = MariaDB
baseurl=https://mirrors.tuna.tsinghua.edu.cn/mariadb/yum/10.1/centos7-amd64/
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck=0
enabled=1
EOF
 
yum clean all
yum install  MariaDB-server python2-PyMySQL -y
 
MariaDB-client and galera are pulled in as dependencies during the installation
================================================================================
Package            Arch       Version                        Repository   Size
================================================================================
Installing:
MariaDB-server     x86_64     10.1.40-1.el7.centos           mariadb      24 M
Installing for dependencies:
MariaDB-client     x86_64     10.1.40-1.el7.centos           mariadb      10 M
galera             x86_64     25.3.26-1.rhel7.el7.centos     mariadb     8.1 M
 
Transaction Summary
================================================================================
 
Configuration of /etc/my.cnf
[mysqld]
port = 3306
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_connections = 10000
wait_timeout = 600
symbolic-links=0
key_buffer_size = 64M
max_heap_table_size = 64M
tmp_table_size = 64M
innodb_buffer_pool_size = 4096M
table_open_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 16M
thread_concurrency = 8
max_allowed_packet = 64M
wait_timeout=2880000
interactive_timeout = 2880000
default-storage-engine = innodb
innodb_autoinc_lock_mode = 2
collation-server = utf8_general_ci
character_set_server=utf8
skip-name-resolve
!includedir /etc/my.cnf.d/
[mariadb]
log-error=/var/log/mariadb/mariadb.log
 
mkdir -pv /var/log/mariadb/
Run the following on 10.1.36.21 (once the cluster is up, these password settings replicate to the other members)
Start the database
systemctl start mariadb
 
Set the root password separately; it can also be set during database initialization
mysqladmin -u root password 04aea9de5f79
 
Initialize the database
# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
You already have a root password set, so you can safely answer 'n'.
Change the root password? [Y/n] n
... skipping.
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] n
... skipping.
By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
Stop the database on all nodes
systemctl stop mariadb
 
Write a configuration like the following on all three servers, adjusting it for each node
# cat /etc/my.cnf.d/galera.cnf
[galera]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=10.1.36.21
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name=openstack
wsrep_cluster_address="gcomm://10.1.36.21:4567,10.1.36.22:4567,10.1.36.23:4567"
wsrep_node_name=controller1
wsrep_node_address=10.1.36.21
wsrep_sst_method=rsync
wsrep_causal_reads=ON
wsrep_slave_threads=4
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
 
Note: if the galera-4 package was installed, set wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so instead
 
Copy this file to the other two nodes (controller2 and controller3), changing wsrep_node_name and wsrep_node_address to each node's hostname and IP. Example:
[galera]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=10.1.36.22
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name=openstack
wsrep_cluster_address="gcomm://10.1.36.21:4567,10.1.36.22:4567,10.1.36.23:4567"
wsrep_node_name=controller2
wsrep_node_address=10.1.36.22
wsrep_sst_method=rsync
wsrep_causal_reads=ON
wsrep_slave_threads=4
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
 
 
Start the MariaDB Galera Cluster service:
# /bin/galera_new_cluster
 
Start the remaining two nodes with:
systemctl start mariadb
systemctl enable  mariadb
Check the cluster status (the cluster uses ports 4567 and 3306):
[root@node1 ~]# netstat -tnlp | grep -e 4567 -e 3306
tcp        0      0 0.0.0.0:4567            0.0.0.0:*               LISTEN      17908/mysqld
tcp        0      0 10.1.36.21:3306       0.0.0.0:*               LISTEN      17908/mysqld
 
After the database service is up, log in to MySQL and check wsrep_cluster_size to confirm the cluster started successfully.
MariaDB [(none)]>  SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
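A simple end-to-end replication check (a sketch; the database name test_repl is just a placeholder): create a database on one node, confirm it is visible on another, then drop it.

# on controller1
mysql -u root -p04aea9de5f79 -e "CREATE DATABASE test_repl;"
# on controller2 - the database should already be there
mysql -u root -p04aea9de5f79 -e "SHOW DATABASES LIKE 'test_repl';"
# clean up (on any node)
mysql -u root -p04aea9de5f79 -e "DROP DATABASE test_repl;"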
 
Note: to log in to MySQL without typing the password each time, do the following
[root@controller1 ~]# vim .my.cnf
[client]
host=localhost
user=root
password=04aea9de5f79
 
// check the Galera cluster running state
MariaDB [(none)]> show status like '%wsrep%';
+------------------------------+-------------------------------------------------+
| Variable_name                | Value                                           |
+------------------------------+-------------------------------------------------+
| wsrep_apply_oooe             | 0.000000                                        |
| wsrep_apply_oool             | 0.000000                                        |
| wsrep_apply_window           | 0.000000                                        |
| wsrep_causal_reads           | 1                                               |
| wsrep_cert_deps_distance     | 0.000000                                        |
| wsrep_cert_index_size        | 0                                               |
| wsrep_cert_interval          | 0.000000                                        |
| wsrep_cluster_conf_id        | 3                                               |
| wsrep_cluster_size           | 3                                               |
| wsrep_cluster_state_uuid     | 172f16a7-90eb-11ea-9b3e-3f0cc8c75e00            |
| wsrep_cluster_status         | Primary                                         |
| wsrep_cluster_weight         | 3                                               |
| wsrep_commit_oooe            | 0.000000                                        |
| wsrep_commit_oool            | 0.000000                                        |
| wsrep_commit_window          | 0.000000                                        |
| wsrep_connected              | ON                                              |
| wsrep_desync_count           | 0                                               |
| wsrep_evs_delayed            |                                                 |
| wsrep_evs_evict_list         |                                                 |
| wsrep_evs_repl_latency       | 0/0/0/0/0                                       |
| wsrep_evs_state              | OPERATIONAL                                     |
| wsrep_flow_control_paused    | 0.000000                                        |
| wsrep_flow_control_paused_ns | 0                                               |
| wsrep_flow_control_recv      | 0                                               |
| wsrep_flow_control_sent      | 0                                               |
| wsrep_gcomm_uuid             | 172e0de4-90eb-11ea-b3cc-ba7cdfffe01f            |
| wsrep_incoming_addresses     | 10.1.36.21:3306,10.1.36.22:3306,10.1.36.23:3306 |
| wsrep_last_committed         | 0                                               |
| wsrep_local_bf_aborts        | 0                                               |
| wsrep_local_cached_downto    | 18446744073709551615                            |
| wsrep_local_cert_failures    | 0                                               |
| wsrep_local_commits          | 0                                               |
| wsrep_local_index            | 0                                               |
| wsrep_local_recv_queue       | 0                                               |
| wsrep_local_recv_queue_avg   | 0.100000                                        |
| wsrep_local_recv_queue_max   | 2                                               |
| wsrep_local_recv_queue_min   | 0                                               |
| wsrep_local_replays          | 0                                               |
| wsrep_local_send_queue       | 0                                               |
| wsrep_local_send_queue_avg   | 0.000000                                        |
| wsrep_local_send_queue_max   | 1                                               |
| wsrep_local_send_queue_min   | 0                                               |
| wsrep_local_state            | 4                                               |
| wsrep_local_state_comment    | Synced                                          |
| wsrep_local_state_uuid       | 172f16a7-90eb-11ea-9b3e-3f0cc8c75e00            |
| wsrep_open_connections       | 0                                               |
| wsrep_open_transactions      | 0                                               |
| wsrep_protocol_version       | 9                                               |
| wsrep_provider_name          | Galera                                          |
| wsrep_provider_vendor        | Codership Oy <info@codership.com>               |
| wsrep_provider_version       | 25.3.26(r3857)                                  |
| wsrep_ready                  | ON                                              |
| wsrep_received               | 10                                              |
| wsrep_received_bytes         | 762                                             |
| wsrep_repl_data_bytes        | 0                                               |
| wsrep_repl_keys              | 0                                               |
| wsrep_repl_keys_bytes        | 0                                               |
| wsrep_repl_other_bytes       | 0                                               |
| wsrep_replicated             | 0                                               |
| wsrep_replicated_bytes       | 0                                               |
| wsrep_thread_count           | 2                                               |
+------------------------------+-------------------------------------------------+
61 rows in set (0.01 sec)
 
Configure the databases
Official documentation for SQL database installation on RHEL and CentOS: https://docs.openstack.org/install-guide/environment-sql-database-rdo.html
 
# Glance database
mysql -u root -e "CREATE DATABASE glance;"
mysql -u root -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '04aea9de5f79';"
# Nova database
mysql -u root -e "CREATE DATABASE nova;"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '04aea9de5f79';"
 
mysql -u root -e "CREATE DATABASE nova_api; "
mysql -u root -e " GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '04aea9de5f79'; "
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '04aea9de5f79';"
 
mysql -u root -e "CREATE DATABASE nova_cell0;"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost'  IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%'  IDENTIFIED BY '04aea9de5f79';"
 
mysql -u root -e "CREATE DATABASE placement;"
mysql -u root -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost'   IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%'    IDENTIFIED BY '04aea9de5f79';"
# Neutron database
mysql -u root -e "CREATE DATABASE neutron;"
mysql -u root -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '04aea9de5f79';"
# Cinder database
mysql -u root -e "CREATE DATABASE cinder;"
mysql -u root -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '04aea9de5f79';"
 
# Keystone database
mysql -u root -e "CREATE DATABASE keystone;"
mysql -u root -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '04aea9de5f79';"
 
Note: since the N (Newton) release, Nova has an additional nova_cell0 database

If your database has no root password set, run the following commands yourself to create all the databases
mysql -u root  -e "CREATE DATABASE glance;"
mysql -u root  -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "CREATE DATABASE nova;"
mysql -u root  -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "CREATE DATABASE nova_api; "
mysql -u root  -e " GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '04aea9de5f79'; "
mysql -u root  -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "CREATE DATABASE nova_cell0;"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost'  IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%'  IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "CREATE DATABASE placement;"
mysql -u root -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost'   IDENTIFIED BY '04aea9de5f79';"
mysql -u root -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%'    IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "CREATE DATABASE neutron;"
mysql -u root  -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "CREATE DATABASE cinder;"
mysql -u root  -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 04aea9de5f79';"
mysql -u root  -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "CREATE DATABASE keystone;"
mysql -u root  -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '04aea9de5f79';"
mysql -u root  -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '04aea9de5f79';"
 
Check that the databases and grants were created correctly, using the keystone database as an example
[root@controller1 ~]# mysql -u root -e "SHOW GRANTS FOR keystone@'%';"
+---------------------------------------------------------------------------------------------------------+
| Grants for keystone@%                                                                                   |
+---------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'keystone'@'%' IDENTIFIED BY PASSWORD '*83B3E60006625DF16A9138A99349CF7D4DF0235B' |
| GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'%'                                                  |
+---------------------------------------------------------------------------------------------------------+
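It is also worth confirming that the grants work through the HAProxy VIP, since that is the address the OpenStack services will use (a sketch reusing the keystone credentials created above):

mysql -h 10.1.36.28 -ukeystone -p04aea9de5f79 -e "SHOW DATABASES;"
# only information_schema and keystone should be listed for this user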
 
Chapter 4: Message Broker RabbitMQ (RabbitMQ cluster deployment)
 
Install RabbitMQ
Install rabbitmq-server on all RabbitMQ cluster servers and start the service:
# yum install -y rabbitmq-server
Start the message queue service and configure it to start at boot:
 
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
 
List the installed plugins
rabbitmq-plugins list
Enable the management plugin
rabbitmq-plugins enable rabbitmq_management

Check the cluster status on each node
rabbitmqctl cluster_status
Cluster status of node rabbit@controller1
[{nodes,[{disc,[rabbit@controller1]}]},
{running_nodes,[rabbit@controller1]},
{cluster_name,<<"rabbit@controller1">>},
{partitions,[]},
{alarms,[{rabbit@controller1,[]}]}]
 
Stop the rabbitmq service on all rabbitmq servers, copy the cookie file, then start it again
systemctl stop rabbitmq-server.service
// hidden file; it must be identical on all three control nodes
scp /var/lib/rabbitmq/.erlang.cookie root@10.1.36.22:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@10.1.36.23:/var/lib/rabbitmq/.erlang.cookie
 
Start the rabbitmq service on all rabbitmq servers
systemctl start rabbitmq-server.service
++++++ The following is run on the RAM nodes ++++++
Join controller2 and controller3 as RAM nodes to the controller1 disc node
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@controller1 # join the disc node
rabbitmqctl start_app
#ram: join as a RAM node; omit the flag to join as a disc node
$ rabbitmqctl cluster_status    # verify the cluster status
Cluster status of node rabbit@controller1
[{nodes,[{disc,[rabbit@controller1]},
         {ram,[rabbit@controller3,rabbit@controller2]}]},
{running_nodes,[rabbit@controller3,rabbit@controller2,rabbit@controller1]},
{cluster_name,<<"rabbit@controller1">>},
{partitions,[]},
{alarms,[{rabbit@controller3,[]},
          {rabbit@controller2,[]},
          {rabbit@controller1,[]}]}]
Queue mirroring is configured through a policy:
rabbitmqctl set_policy  ha-all "#" '{"ha-mode":"all"}'
Add the user and do the management-plugin related operations on controller1 (the disc node)
 
Add the openstack user:
[root@controller1 ~]# rabbitmqctl add_user openstack 04aea9de5f79
 
Note: make sure the hostname matches what /etc/hosts shows when running this, otherwise the operation fails with an error.
Replace RABBIT_DBPASS with a suitable password.
 
Give the ``openstack`` user configure, write and read permissions:
 
[root@controller1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
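To double-check the account and the mirroring policy (standard rabbitmqctl commands):

rabbitmqctl list_users                 # openstack should be listed (it gets the administrator tag via the web UI below, or with rabbitmqctl set_user_tags)
rabbitmqctl list_permissions -p /      # openstack should show ".*  .*  .*"
rabbitmqctl list_policies              # the ha-all policy defined above should be present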
 
---------------------------------------------------------------------------------------
Check the listening ports again; the web management port is 15672
# netstat -lntup | grep 5672
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      4900/beam.smp       
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      4900/beam.smp       
tcp6       0      0 :::5672                 :::*                    LISTEN      4900/beam.smp
 
---------------------------------------------------------------------------------------
 
In a browser open 10.1.36.28:15672, user guest, password guest
After logging in:
Admin -> copy administrator -> click openstack -> Update this user ->
Tags: paste administrator -> set the password to 04aea9de5f79 -> logout
Then log back in with user openstack and password 04aea9de5f79
 
Install Memcached
Memcached must be installed on every control node.
The Identity service authentication mechanism uses Memcached to cache tokens. The memcached service typically runs on the controller nodes. For production deployments, a combination of firewalling, authentication and encryption is recommended to protect it.
Install and configure the components
1. Install the packages:
[root@controller1 ~]# yum install -y memcached python-memcached
2. Edit the /etc/sysconfig/memcached file and complete the following:
    * Configure the service to use the management IP address of the controller node, to enable access by the other nodes over the management network:
OPTIONS="-l 0.0.0.0,::1"
 
Other configuration changes:
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="4096"
CACHESIZE="1024"
OPTIONS="-l 0.0.0.0,::1"
 
Finish the installation
* Start the Memcached service and configure it to start when the system boots:
systemctl enable memcached.service
systemctl start memcached.service
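A quick functional check using the python-memcached module installed above (a sketch; the key name hello is arbitrary):

python -c "
import memcache
c = memcache.Client(['10.1.36.21:11211'])
c.set('hello', 'world')
print(c.get('hello'))       # should print: world
print(c.get_stats()[0][0])  # server identifier, confirms the daemon answers
"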
 
Chapter 5: OpenStack Identity Service Keystone
---------------------------------------------------------------------------------------
What Keystone does: users and authentication (user permissions and user activity tracking);
              service catalog: a catalog of all services and the endpoints of their APIs
Key terms: User, Tenant (project), Token, Role, Service, Endpoint
----------------------------------------------------------------------------------------
1. Install keystone
[root@controller1 ~]# yum install -y openstack-keystone httpd mod_wsgi
 
[root@controller1 ~]# openssl rand -hex 10        # generate a random token
dc46816a3e103ec2a700
 
Edit the file /etc/keystone/keystone.conf and complete the following:

In the [DEFAULT] section, define the value of the initial administration token:

[DEFAULT]
...
admin_token = ADMIN_TOKEN
Replace ADMIN_TOKEN with the random value generated in the previous step.

In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@10.1.36.28/keystone
Replace KEYSTONE_DBPASS with the password you chose for the database.

In the [token] section, configure the Fernet token provider:

[token]
...
provider = fernet
Initialize the Identity service database:
 
The finished /etc/keystone/keystone.conf:
[root@controller1 ~]# grep -vn '^$\|^#'  /etc/keystone/keystone.conf  
[DEFAULT]
admin_token = dc46816a3e103ec2a700
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:04aea9de5f79@10.1.36.28/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_tokens]
[healthcheck]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
servers=10.1.36.28:11211
[oauth1]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[profiler]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
driver = memcache
[tokenless_auth]
[trust]
 
Note: unless there is a specific reason not to, use the VIP address in these settings, for the sake of cluster high availability. The IP 10.1.36.28 that appears throughout the later configuration is used for the same reason and will not be explained again.
-----------------------------------------------------------------------------------------------
 
Populate the database. Mind the permissions: use su -s to run the command as the keystone user:
 
[root@controller1 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@controller1 ~]# tail /var/log/keystone/keystone.log # check for errors
2020-05-12 09:54:25.345 31846 INFO migrate.versioning.api [-] 56 -> 57...
2020-05-12 09:54:25.356 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.357 31846 INFO migrate.versioning.api [-] 57 -> 58...
2020-05-12 09:54:25.368 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.368 31846 INFO migrate.versioning.api [-] 58 -> 59...
2020-05-12 09:54:25.380 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.381 31846 INFO migrate.versioning.api [-] 59 -> 60...
2020-05-12 09:54:25.392 31846 INFO migrate.versioning.api [-] done
2020-05-12 09:54:25.393 31846 INFO migrate.versioning.api [-] 60 -> 61...
2020-05-12 09:54:25.404 31846 INFO migrate.versioning.api [-] done
[root@controller1 ~]# chown -R keystone:keystone /var/log/keystone/keystone.log  # optional
[root@controller1 keystone]# mysql  -ukeystone -p04aea9de5f79 keystone -e "use keystone;show tables;"
+-----------------------------+
| Tables_in_keystone          |
+-----------------------------+
| access_token                |
| application_credential      |
| application_credential_role |
| assignment                  |
| config_register             |
| consumer                    |
| credential                  |
| endpoint                    |
| endpoint_group              |
| federated_user              |
| federation_protocol         |
| group                       |
| id_mapping                  |
| identity_provider           |
| idp_remote_ids              |
| implied_role                |
| limit                       |
| local_user                  |
| mapping                     |
| migrate_version             |
| nonlocal_user               |
| password                    |
| policy                      |
| policy_association          |
| project                     |
| project_endpoint            |
| project_endpoint_group      |
| project_tag                 |
| region                      |
| registered_limit            |
| request_token               |
| revocation_event            |
| role                        |
| sensitive_config            |
| service                     |
| service_provider            |
| system_assignment           |
| token                       |
| trust                       |
| trust_role                  |
| user                        |
| user_group_membership       |
| user_option                 |
| whitelisted_config          |
+-----------------------------+
All tables have been created, OK
 
Note: if the tables were not created, check the log; usually the database connection in the configuration is wrong. The log is /var/log/keystone/keystone.log.
 
Initialize the Fernet keys:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
 
Note: the Fernet key initialization is needed on every control node; the keystone database sync only needs to be run once, on any one control node.
 
If the fernet-keys and credential-keys directories shown below exist, the initialization commands above have completed
[root@controller1 ~]# ls -lh /etc/keystone/
total 136K
drwx------. 2 keystone keystone   24 Feb 28 14:16 credential-keys
-rw-r-----. 1 root     keystone 2.3K Nov  1 06:24 default_catalog.templates
drwx------. 2 keystone keystone   24 Feb 28 14:16 fernet-keys
-rw-r-----. 1 root     keystone 114K Feb 28 14:14 keystone.conf
-rw-r-----. 1 root     keystone 2.5K Nov  1 06:24 keystone-paste.ini
-rw-r-----. 1 root     keystone 1.1K Nov  1 06:24 logging.conf
-rw-r-----. 1 root     keystone    3 Nov  1 17:21 policy.json
-rw-r-----. 1 keystone keystone  665 Nov  1 06:24 sso_callback_template.html
 
Copy the configuration to the other two control nodes (the keystone, glance, nova, neutron, etc. configurations must be kept identical on all three control nodes; this will not be repeated later)
Pack up controller1's /etc/keystone/ directory and transfer it to the other control nodes
cd /etc/keystone
tar czvf keystone-controller1.tar.gz ./*
scp keystone-controller1.tar.gz root@10.1.36.22:/etc/keystone/
scp keystone-controller1.tar.gz root@10.1.36.23:/etc/keystone/
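The archive still has to be unpacked on the other two nodes, and the key directories must stay owned by keystone. A minimal sketch of that step (the tarball path is the one created above):

ssh root@10.1.36.22 "cd /etc/keystone && tar xzvf keystone-controller1.tar.gz && chown -R keystone:keystone fernet-keys credential-keys"
ssh root@10.1.36.23 "cd /etc/keystone && tar xzvf keystone-controller1.tar.gz && chown -R keystone:keystone fernet-keys credential-keys"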
----------------------------------------------------------------------------------
 
Configure the Apache HTTP server

Edit the /etc/httpd/conf/httpd.conf file and set the ServerName option on the control node:
Listen 0.0.0.0:80
ServerName localhost:80

httpd's ServerName must be configured, otherwise the keystone service will not start
 
Copy the configuration to the other control nodes
scp /etc/httpd/conf/httpd.conf root@10.1.36.22:/etc/httpd/conf/
scp /etc/httpd/conf/httpd.conf root@10.1.36.23:/etc/httpd/conf/
 
Below is the content of /etc/httpd/conf.d/wsgi-keystone.conf, served through Apache. Port 5000 is the regular API endpoint (35357 was the old admin port; in Stein everything goes through 5000).
[root@controller1 ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller1 ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf
[root@controller1 ~]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 0.0.0.0:5000
 
 
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined
 
 
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
 
 
Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
    SetHandler wsgi-script
    Options +ExecCGI
 
 
    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>
 
Copy the configuration to the other control nodes
 
scp /etc/httpd/conf.d/wsgi-keystone.conf root@10.1.36.22:/etc/httpd/conf.d/
scp /etc/httpd/conf.d/wsgi-keystone.conf root@10.1.36.23:/etc/httpd/conf.d/
---------------------------------------------------------------------------------------------------
Start the Apache HTTP service and configure it to start with the system:
[root@controller1 ~]# systemctl enable httpd.service && systemctl start httpd.service
---------------------------------------------------------------------------------------------------
查看端口: 
[root@controller1 ~]# netstat -lntup|grep httpd
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      10038/httpd         
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      10038/httpd  
 
查看日志/var/log/keystone/keystone.log
没有ERROR说明keystone启动正常
[root@controller1 ~]#  tail -n 20 /var/log/keystone/keystone.log
2020-05-08 17:10:24.056 8156 INFO migrate.versioning.api [-] 43 -> 44...
2020-05-08 17:10:24.069 8156 INFO migrate.versioning.api [-] done
2020-05-08 17:12:33.635 8258 INFO keystone.common.token_utils [-] key_repository does not appear to exist; attempting to create it
2020-05-08 17:12:33.635 8258 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
2020-05-08 17:12:33.636 8258 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
2020-05-08 17:12:33.636 8258 INFO keystone.common.token_utils [-] Starting key rotation with 1 key files: ['/etc/keystone/fernet-keys/0']
2020-05-08 17:12:33.636 8258 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Current primary key is: 0
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Next primary key will be: 1
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Promoted key 0 to be the primary: 1
2020-05-08 17:12:33.637 8258 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
2020-05-08 17:12:41.854 8271 INFO keystone.common.token_utils [-] key_repository does not appear to exist; attempting to create it
2020-05-08 17:12:41.855 8271 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
2020-05-08 17:12:41.855 8271 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
2020-05-08 17:12:41.855 8271 INFO keystone.common.token_utils [-] Starting key rotation with 1 key files: ['/etc/keystone/credential-keys/0']
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Current primary key is: 0
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Next primary key will be: 1
2020-05-08 17:12:41.856 8271 INFO keystone.common.token_utils [-] Promoted key 0 to be the primary: 1
2020-05-08 17:12:41.857 8271 INFO keystone.common.token_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
 
进行后面的操作前,必须保障keystone的api和管理访问端口正常,是否正常可以通过访问web页面的方式确认,见下面的示例。
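下面是一个用curl验证的示例(假设VIP和端口为本文的10.1.36.28:5000),返回包含version信息的JSON即说明端口正常:
curl http://10.1.36.28:5000/v3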
 
---------------------------------------------------------------------------------------------------
 
创建验证用户及地址版本信息:
[root@controller1 ~]# grep -n '^admin_token' /etc/keystone/keystone.conf
18:admin_token = dc46816a3e103ec2a700
 
[root@controller1 ~]# export OS_TOKEN=dc46816a3e103ec2a700    -------设置环境变量
[root@controller1 ~]# export OS_URL=http://10.1.36.28:5000/v3
[root@controller1 ~]# export OS_IDENTITY_API_VERSION=3
[root@controller1 ~]# env|grep ^OS #查看是否设置成功
OS_IDENTITY_API_VERSION=3
OS_TOKEN=dc46816a3e103ec2a700
 
[root@controller1 ~]# openstack domain list #验证一下,没有输出是对的,因为我们还没有创建,如果出现错误,请查看日志解决
 
创建域、项目、用户和角色
 
身份认证服务为每个OpenStack服务提供认证服务。认证服务使用 domains、projects(tenants)、users 和 roles 的组合。
 
 创建默认域
openstack domain create --description "Default Domain" default
 
在你的环境中,为进行管理操作,创建管理的项目、用户和角色:
 
创建 admin 项目:
 
openstack project create --domain default --description "Admin Project" admin
 
注解
 
OpenStack 是动态生成 ID 的,因此您看到的输出会与示例中的命令行输出不相同。
 
创建 admin 用户:
 
openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | ae9d80f6b8f94403ac1ddf0ff2cad01e |
| enabled             | True                             |
| id                  | efe2970c7ab74c67a4aced146cee3fb0 |
| name                | admin                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
密码设置为了04aea9de5f79
创建 admin 角色:
 
openstack role create admin
 
添加``admin`` 角色到 admin 项目和用户上,并授权admin的角色:
 
openstack role add --project admin --user admin admin
注解
 
这个命令执行后没有输出。
注解
你创建的任何角色必须映射到每个OpenStack服务配置文件目录(/etc/keystone/)下的``policy.json`` 文件中。默认策略是给予"admin"角色大部分服务的管理访问权限。更多信息,参考 Operations Guide - Managing Projects and Users(http://docs.openstack.org/ops-guide/opsrojects-users.html)。
 
扩展:最好把注册时已经添加的admin用户删除,因为你不知道密码...
 
 
创建``demo`` 项目:
 
openstack project create --domain default --description "Demo Project" demo
注解
 
当为这个项目创建额外用户时,不要重复这一步。
 
创建``demo`` 用户:
 
openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | ae9d80f6b8f94403ac1ddf0ff2cad01e |
| enabled             | True                             |
| id                  | e40023738a1e40e8b3fc6fd3bee7dae7 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
密码设置为了04aea9de5f79
创建 user 角色:
 
openstack role create user
 
添加 user 角色到 demo 项目和用户:
 
openstack role add --project demo --user demo user
 
本指南使用一个 service 项目,你添加到环境中的每个服务都在其中拥有独立的用户。创建``service``项目:
openstack project create --domain default --description "Service Project" service
 
快速粘贴命令行
export OS_TOKEN=dc46816a3e103ec2a700
export OS_URL=http://10.1.36.28:5000/v3
export OS_IDENTITY_API_VERSION=3
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin
 
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
 
openstack role create user
openstack role add --project demo --user demo user
openstack project create --domain default --description "Service Project" service
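注:上面快速粘贴命令中的两条 openstack user create 使用了 --password-prompt,需要交互输入密码;如果希望整段命令直接粘贴执行,可以改用非交互方式,示例如下(密码沿用本文使用的04aea9de5f79):
openstack user create --domain default --password 04aea9de5f79 admin
openstack user create --domain default --password 04aea9de5f79 demo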
 
 
--------------------------------------------------------------------------------------------------
 
查看创建的用户及角色:
 
[root@controller1 ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 2b3676307efa44759e21b0ac0b84dd7d | admin |
| 9813446ed72a4d548425ab5567f7ac42 | demo  |
+----------------------------------+-------+
[root@controller1 ~]# openstack role list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 5486767f05c74584b327b3ec8b808966 | user  |
| a4e5cf4725574da5b01d6a351026a66b | admin |
+----------------------------------+-------+
[root@controller1 ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 1633afbf896341178c61d563a461cd47 | service |
| 445adc5d8a7e49a693530192fb8fb4c2 | admin   |
| 7a42622b277a48baaa80a38571f0c5ac | demo    |
+----------------------------------+---------+
 
-------------------------------------------------------------------------------------------------
 
创建glance用户:
 
openstack user create --domain default --password=04aea9de5f79 glance
将此用户加入到项目里面并给它赋予admin的权限:
openstack role add --project service --user glance admin
 
创建nova用户:
openstack user create --domain default --password=04aea9de5f79 nova
openstack role add --project service --user nova admin
 
创建nova[placement]用户:
openstack user create --domain default --password=04aea9de5f79 placement
openstack role add --project service --user placement admin
 
创建neutron用户:
openstack user create --domain default --password=04aea9de5f79 neutron
openstack role add --project service --user neutron admin
 
引导身份服务:
keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://10.1.36.28:5000/v3/ \
--bootstrap-internal-url http://10.1.36.28:5000/v3/ \
--bootstrap-public-url http://10.1.36.28:5000/v3/ \
--bootstrap-region-id RegionOne
这个步骤有可能不需要
 
如果这步出错,如你写错了域名或端口等,会无法创建下面的domain、projects、users和roles,重新配置是不能解决的,因为它不会覆盖前面的配置,解决办法如下:
MariaDB [keystone]> select * from endpoint;
+----------------------------------+--------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
| id                               | legacy_endpoint_id | interface | service_id                       | url                        | extra | enabled | region_id |
+----------------------------------+--------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
| 94f2003fb6f34c50828177fb5bfa0724 | NULL               | public    | d11569bcab004ad3b0b2de12b5e363c9 | http://10.1.36.28:9292     | {}    |       1 | RegionOne |
| b7dc83fbd2f24f48a26e6fd392bcda27 | NULL               | internal  | a698441d64a94ed888fc97087428af74 | http://10.1.36.28:5000/v3  | {}    |       1 | RegionOne |
| b86828a2a2c44f53abd1d67176b3cadc | NULL               | public    | a698441d64a94ed888fc97087428af74 | http://10.1.36.28:5000/v3  | {}    |       1 | RegionOne |
| c21ec48a677d44fab2422ba77d53ca94 | NULL               | internal  | d11569bcab004ad3b0b2de12b5e363c9 | http://10.1.36.28:9292     | {}    |       1 | RegionOne |
| ecc28f07128c4723bc5f5363fbc385f3 | NULL               | admin     | a698441d64a94ed888fc97087428af74 | http://10.1.36.28:35357/v3 | {}    |       1 | RegionOne |
| f328b0b8a9b942ce9dffd06b6eaa740a | NULL               | admin     | d11569bcab004ad3b0b2de12b5e363c9 | http://10.1.36.28:9292     | {}    |       1 | RegionOne |
+----------------------------------+--------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
6 rows in set (0.00 sec)
 
 
MariaDB [keystone]> delete from endpoint where url like '%36.28%';
Query OK, 6 rows affected (0.01 sec)
 
 
MariaDB [keystone]>  select * from endpoint;
Empty set (0.00 sec)
 
处理完成后,重新执行上面的步骤
 
创建服务实体和API端点
在你的Openstack环境中,认证服务管理服务目录。服务使用这个目录来决定您的环境中可用的服务。
 
创建服务实体和身份认证服务:
 
[root@controller1 ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | ab1131690e2a4787b3a4282c07327250 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
注解
 
OpenStack 是动态生成 ID 的,因此您看到的输出会与示例中的命令行输出不相同。
 
身份认证服务管理了一个与您环境相关的 API 端点的目录。服务使用这个目录来决定如何与您环境中的其他服务进行通信。
 
OpenStack为每种服务使用三种API端点:admin、internal和public。默认情况下,管理(admin)API端点允许修改用户和租户,而公共和内部API不允许这些操作。在生产环境中,出于安全原因,不同类型的端点可能为不同类型的用户驻留在单独的网络上。例如,公共API网络需要在互联网上可见,以便客户管理他们自己的云;管理API网络则仅限于管理云基础设施的运维人员访问;内部API网络可能会被限制在运行OpenStack服务的主机上。此外,OpenStack支持多区域以实现可伸缩性。为了简单起见,本指南为所有端点类型和默认的``RegionOne``区域都使用管理网络。
 
创建认证服务的 API 端点:
 
openstack endpoint create --region RegionOne identity public http://10.1.36.28:5000/v3
openstack endpoint create --region RegionOne identity internal http://10.1.36.28:5000/v3
openstack endpoint create --region RegionOne identity admin http://10.1.36.28:5000/v3
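创建完成后可以先简单确认一下端点是否都已生成(示例,完整的端点列表在后面glance章节的输出中也可以看到):
openstack endpoint list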
 
验证操作
 
在安装其他服务之前确认身份认证服务的操作。
注解
在控制节点上执行这些命令。
 
[root@controller1 ~]# unset OS_TOKEN OS_URL
[root@controller1 ~]# openstack --os-auth-url http://10.1.36.28:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:
+------------+--------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                            |
+------------+--------------------------------------------------------------------------------------------------+
| expires    | 2020-05-08T12:25:45+0000                                                                         |
| id         | gAAAAABetUG5qEjRd4eIeiIfkTxVWrtMFQ_M7bvZ-GFGsguCOjeOs9GFgJJtPhWcgLOmDrYpnzO44nY5E-               |
|            | _H3KleSFOg9vnEqVb_ljbFe1dJ5mYXCcoLKaFZL-                                                         |
|            | JlM6g7_gdKtNsqGANNzm3jf_rB42Yt2FG9MMbr9iL7dPgjI18MldQP2vrD4gU                                    |
| project_id | 445adc5d8a7e49a693530192fb8fb4c2                                                                 |
| user_id    | 2b3676307efa44759e21b0ac0b84dd7d                                                                 |
+------------+--------------------------------------------------------------------------------------------------+
 
到此处说明keystone已经成功了
[root@controller1 ~]# openstack --os-auth-url http://10.1.36.28:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
Password:
+------------+--------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                            |
+------------+--------------------------------------------------------------------------------------------------+
| expires    | 2020-05-08T12:26:45+0000                                                                         |
| id         | gAAAAABetUH1qGNgacWyT76IQRNCcmRGFxJ-                                                             |
|            | Fji2Vl23eBtqpppIwFxRqAqXWJH23V4jD7IkhBpTVu5bIPUhEgq6Tof2HmBN3dAlDbohKI1vEyKRJw9QUDZB9_-          |
|            | 31sO_k96GcIOVrUD_OcEGhjcSsWUnylGMVIQsYCBwiIn1dyl1H_A0oxSwTsI                                     |
| project_id | 7a42622b277a48baaa80a38571f0c5ac                                                                 |
| user_id    | 9813446ed72a4d548425ab5567f7ac42                                                                 |
+------------+--------------------------------------------------------------------------------------------------+
 
 
创建 OpenStack 客户端环境脚本
为 admin 和 ``demo`` 项目和用户创建客户端环境变量脚本
[root@controller1 ~]# vim admin-openstack.sh
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=04aea9de5f79
export OS_AUTH_URL=http://10.1.36.28:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
 
[root@controller1 ~]# source admin-openstack.sh
 
注1:如果我想修改用户admin的密码,可以使用这个命令openstack user password set --password 04aea9de5f79来修改当前用户的密码为04aea9de5f79
注2:如果你要更改不同用户的密码,可以使用这个命令,以更换admin用户密码为例:openstack user set --password 04aea9de5f79 admin
 
[root@controller1 ~]# openstack token issue
+------------+--------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                            |
+------------+--------------------------------------------------------------------------------------------------+
| expires    | 2020-05-08T12:25:04+0000                                                                         |
| id         | gAAAAABetUGQgF-ERb2G_km7emcwzZszP3Cd8RYCN38RMkY4lyom0P2AqK6o4MzUoxwRHvn_lHq0wHu_42RicpXRRiZ4lDG1 |
|            | fFB0ecLZW9Q6dAP9OUmQvZkoDv3IybNcjAStw6vzu128syVEW_BjgVrK_LuCl5ZVgk5Z8wEY_SwfozHsnSA6JWA          |
| project_id | 445adc5d8a7e49a693530192fb8fb4c2                                                                 |
| user_id    | 2b3676307efa44759e21b0ac0b84dd7d                                                                 |
+------------+--------------------------------------------------------------------------------------------------+
[root@controller1 ~]# vim demo-openstack.sh           
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=04aea9de5f79
export OS_AUTH_URL=http://10.1.36.28:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
 
[root@controller1 ~]# source demo-openstack.sh     
[root@controller1 ~]# openstack token issue   --fit-width
+------------+--------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                            |
+------------+--------------------------------------------------------------------------------------------------+
| expires    | 2020-05-08T12:24:25+0000                                                                         |
| id         | gAAAAABetUFpzlG4u8ur9Exxr6XOtSx3ms9KUcoMkIR-GC8pp27gN530Ytj5bdUP99ETep_NstODWHs1YvVihGH3HDnDmq-  |
|            | iE45sdGdfU-Ic603f4w-JQjd8mtSeJLDIFVUDe4nbW1lA_OukWKhYl9DerU72sV0h_5sqmMW-Qi1-VUQIsd4ftOQ         |
| project_id | 7a42622b277a48baaa80a38571f0c5ac                                                                 |
| user_id    | 9813446ed72a4d548425ab5567f7ac42                                                                 |
+------------+--------------------------------------------------------------------------------------------------+
 
 
第四章 OpenStack镜像服务Glance
 
glance主要由三个部分组成:glance-api、glance-registry以及image store
glance-api:接收云系统镜像的创建、删除、读取请求
glance-registry:云系统的镜像注册服务
image store:镜像文件的实际存储后端,可以是本地文件系统、ceph rbd、swift等(本文使用ceph rbd)
 
1.先决条件
glance服务创建:
source admin-openstack.sh
openstack service create --name glance --description "OpenStack Image service" image
 
创建镜像服务的 API 端点:
openstack endpoint create --region RegionOne   image public http://10.1.36.28:9292
openstack endpoint create --region RegionOne   image internal http://10.1.36.28:9292
openstack endpoint create --region RegionOne   image admin http://10.1.36.28:9292
 
[root@controller1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| 0eb8a8875c17452eb1a32053fafa95c8 | RegionOne | keystone     | identity     | True    | public    | http://10.1.36.28:5000/v3 |
| 2d172275ea58402bbfd7bf58b2c00260 | RegionOne | glance       | image        | True    | public    | http://10.1.36.28:9292    |
| 45f475af73b84dd092da35e3a4844234 | RegionOne | glance       | image        | True    | internal  | http://10.1.36.28:9292    |
| 8584b619de5d42259ad50e48b50ae6ae | RegionOne | keystone     | identity     | True    | internal  | http://10.1.36.28:5000/v3 |
| c7b19ea074d8483da8ee74a784ac579c | RegionOne | glance       | image        | True    | admin     | http://10.1.36.28:9292    |
| dc5d3a56208b4453a1d6650bf2c20f68 | RegionOne | keystone     | identity     | True    | admin     | http://10.1.36.28:5000/v3 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
 
2.安装和配置组件
glance的安装:
 
# yum install -y openstack-glance python-glance python-glanceclient
 
编辑文件 /etc/glance/glance-api.conf 并完成如下动作:
在 [database] 部分,配置数据库访问:
[database]
...
connection = mysql+pymysql://glance:04aea9de5f79@10.1.36.28:3306/glance
 
 
配置keystone与glance-api.conf的链接:
 
编辑/etc/glance/glance-api.conf文件 [keystone_authtoken] 和 [paste_deploy] 部分,配置认证服务访问:
 
[keystone_authtoken]
www_authenticate_uri = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 04aea9de5f79
 
[paste_deploy]
flavor = keystone
 
注:N版后keystone认证版本升级,注意配置时相应地调整认证相关的版本配置,否则执行 openstack image list 会报http 500的错误;后面各服务的keystone认证配置也都要相应调整,不再重复提示。
下面是报错示范
[root@controller1 ~]# openstack image list
Internal Server Error (HTTP 500)
 
# 打开copy-on-write功能
[DEFAULT]
show_image_direct_url = True
 
在 [glance_store] 部分, 变更默认使用的本地文件存储为ceph rbd存储:
 
[glance_store]
...
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
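注:这里假设ceph端已经创建好images存储池以及client.glance用户。如果还没有,可以在ceph管理节点上参考如下命令创建(PG数128仅为示例,请按集群规模调整):
ceph osd pool create images 128
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring
然后把生成的 ceph.client.glance.keyring 和 ceph.conf 拷贝到各控制节点的 /etc/ceph/ 目录下。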
 
 
注:使用默认的本地文件存储配置如下
[glance_store]
enabled_backends = file,http
default_backend = file
filesystem_store_datadir = /var/lib/glance/images/
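注:若采用本地文件存储,还需确保镜像目录存在且属主为glance(示例):
mkdir -p /var/lib/glance/images/
chown glance:glance /var/lib/glance/images/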
 
查看/etc/glance/glance-api.conf是否和下面一样
 
# grep -v '^#\|^$' /etc/glance/glance-api.conf
[DEFAULT]
debug = True
log_file = /var/log/glance/glance-api.log
use_forwarded_for = true
bind_port = 9292
workers = 5
show_multiple_locations = True
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
[cinder]
[cors]
[database]
connection = mysql+pymysql://glance:04aea9de5f79@10.1.36.28:3306/glance
[file]
[glance.store.http.store]
[glance.store.rbd.store]
[glance.store.sheepdog.store]
[glance.store.swift.store]
[glance.store.vmware_datastore.store]
[glance_store]
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
[image_format]
[keystone_authtoken]
www_authenticate_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 04aea9de5f79
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
 
 
注:确保/etc/ceph/ceph.conf和 /etc/ceph/ceph.client.glance.keyring文件存在并有glance访问的权限
[root@controller1 ~]# ls -lh /etc/ceph/
total 16K
-rw-r--r-- 1 glance glance   64 May 12 09:05 ceph.client.cinder.keyring
-rw-r----- 1 glance glance   64 May 12 09:03 ceph.client.glance.keyring
-rw-r--r-- 1 glance glance 1.5K May 12 13:45 ceph.conf
并且ceph.conf文件中有注明client.glance的密钥文件存放位置
# cat /etc/ceph/ceph.conf
[global]
fsid = 3948cba4-b0fa-4e61-84f5-3cec08dd5859
mon_initial_members = ceph-host-01, ceph-host-02, ceph-host-03
mon_host = 10.1.36.11,10.1.36.12,10.1.36.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
 
mon clock drift allowed = 2
mon clock drift warn backoff = 30
 
public_network = 10.1.36.0/24
cluster_network = 192.168.36.0/24
 
max_open_files = 131072
mon_pg_warn_max_per_osd = 1000
osd pool default pg num = 256
osd pool default pgp num = 256
osd pool default size = 2
osd pool default min size = 1
 
mon_osd_full_ratio = .90
mon_osd_nearfull_ratio = .80
osd_deep_scrub_randomize_ratio = 0.01
 
[mon]
mon_allow_pool_delete = true
mon_osd_down_out_interval = 600
mon_osd_min_down_reporters = 3
[mgr]
mgr modules = dashboard
[osd]
osd_journal_size = 20480
osd_max_write_size = 1024
osd mkfs type = xfs
osd_recovery_op_priority = 1
osd_recovery_max_active = 1
osd_recovery_max_single_start = 1
osd_recovery_threads = 1
osd_recovery_max_chunk = 1048576
osd_max_backfills = 1
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 7
osd_recovery_sleep = 0
 
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
rbd_concurrent_management_ops = 10
rbd_cache_size = 67108864
rbd_cache_max_dirty = 50331648
rbd_cache_target_dirty = 33554432
rbd_cache_max_dirty_age = 2
rbd_default_format = 2
 
[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
 
同步数据库:
# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1336: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> liberty, liberty initial
INFO  [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO  [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO  [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO  [alembic.runtime.migration] Running upgrade pike_expand01 -> stein_expand01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: stein_expand01, current revision(s): stein_expand01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO  [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO  [alembic.runtime.migration] Running upgrade pike_contract01 -> stein_contract01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: stein_contract01, current revision(s): stein_contract01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
 
检查数据库是否同步:
[root@controller1 ~]# mysql -uglance -p04aea9de5f79  -e "use glance;show tables;"
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| alembic_version                  |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
-------------------------------------------------------------------------------------------
 
启动glance服务并设置开机启动:
systemctl enable openstack-glance-api 
systemctl start openstack-glance-api
 
 
-------------------------------------------------------------------------------------------
 
监听端口:   api:9292
 
[root@controller1 ~]# netstat -tnlp|grep python
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      15712/python2
-------------------------------------------------------------------------------------------
 
[root@controller1 ~]#  glance image-list
+----+------+
| ID | Name |
+----+------+
+----+------+
 
如果执行glance image-list命令出现以上画面则表示glance安装成功了。
 
 
注:如果出现如下报错示范,一般是/etc/glance/glance-api.conf或者/etc/glance/glance-registry.conf里 www_authenticate_uri 和 auth_url 的配置有错误:Ocata版以前认证地址为 http://10.1.36.28:5000,Ocata版及以后为 http://10.1.36.28:5000/v3。
 
[root@controller1 ~]# openstack image list
Internal Server Error (HTTP 500)
 
 
拓展:
glance image-list 和openstack image list命令的效果是一样的
 
 
---------------------------------------------------------------------------------------------------
 
glance验证操作
 
下载cirros源镜像并上传到glance:
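注:如果本地还没有该镜像文件,可以先下载(示例,使用cirros官方镜像地址):
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img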
 
openstack image create "cirros3.5"  --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+----------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                    |
+------------------+----------------------------------------------------------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                                                                         |
| container_format | bare                                                                                                     |
| created_at       | 2020-05-09T08:23:30Z                                                                                     |
| disk_format      | qcow2                                                                                                    |
| file             | /v2/images/67a2878e-1faf-415b-afc2-c48741dc9a24/file                                                     |
| id               | 67a2878e-1faf-415b-afc2-c48741dc9a24                                                                     |
| min_disk         | 0                                                                                                        |
| min_ram          | 0                                                                                                        |
| name             | cirros3.5                                                                                                |
| owner            | 445adc5d8a7e49a693530192fb8fb4c2                                                                         |
| properties       | direct_url='rbd://b071b40f-44e4-4a25-bdb3-8b654e4a429a/images/67a2878e-1faf-415b-afc2-c48741dc9a24/snap' |
| protected        | False                                                                                                    |
| schema           | /v2/schemas/image                                                                                        |
| size             | 13267968                                                                                                 |
| status           | active                                                                                                   |
| tags             |                                                                                                          |
| updated_at       | 2020-05-09T08:23:33Z                                                                                     |
| virtual_size     | None                                                                                                     |
| visibility       | public                                                                                                   |
+------------------+----------------------------------------------------------------------------------------------------------+
 
 
------------------------------------------------------------------------------------------------
 
查看镜像:
 
[root@controller1 ~]# openstack image list
+--------------------------------------+-----------+--------+
| ID                                   | Name      | Status |
+--------------------------------------+-----------+--------+
| 67a2878e-1faf-415b-afc2-c48741dc9a24 | cirros3.5 | active |
+--------------------------------------+-----------+--------+
 
[root@controller1 ~]# glance image-list        
+--------------------------------------+-----------+
| ID                                   | Name      |
+--------------------------------------+-----------+
| 67a2878e-1faf-415b-afc2-c48741dc9a24 | cirros3.5 |
+--------------------------------------+-----------+
 
镜像存放位置:
[root@controller1 ~]# rbd ls images
67a2878e-1faf-415b-afc2-c48741dc9a24
 
 
注:关于glance服务的高可用,我们可以把controller1这个控制节点下的/etc/glance目录直接打包压缩,拷贝到其他控制节点上,解压后直接启动openstack-glance-api和openstack-glance-registry服务,haproxy节点配置好,就可以做到glance服务的高可用。其他服务也是这么操作的,有些配置文件关于主机IP的地方注意修改下就好。
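下面是一个操作示例(以拷贝到controller2为例,controller3同理,仅作参考):
cd /etc && tar czvf glance-controller1.tar.gz glance
scp /etc/glance-controller1.tar.gz root@10.1.36.22:/etc/
ssh root@10.1.36.22 "cd /etc && tar xzvf glance-controller1.tar.gz && systemctl enable openstack-glance-api && systemctl start openstack-glance-api"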
 
------------------------------------------------------------------------------------------------
 
第五章 Openstack计算服务Nova
 
Nova控制节点(openstack虚拟机必备组件:keystone,glance,nova,neutron)
 
API:负责接收和响应外部请求,支持openstack API,EC2API
Cert:负责身份认证
Scheduler:用于云主机调度
Conductor:计算节点访问数据的中间件
Consoleauth:用于控制台的授权验证
Novncproxy:VNC代理
Nova API组件实现了RESTful API功能,是外部访问Nova的唯一途径。
 
接收外部请求并通过Message Queue将请求发送给其他的服务组件,同时也兼容EC2 API,所以也可以用EC2的管理
工具对nova进行日常管理。
 
Nova Scheduler模块在openstack中的作用就是决策虚拟机创建在哪个主机(计算节点)上。
决策一个虚机应该调度到某物理节点,需要分两个步骤:
 
         过滤(Filter)             计算权值(Weight)
 
Filter Scheduler首先得到未经过滤的主机列表,然后根据过滤属性,选择符合条件的计算节点主机。
经过主机过滤后,需要对主机进行权值的计算,根据策略选择相应的某一台主机(对于每一个要创建的虚拟机而言),过滤器的配置方式见下面的示例。
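注:需要启用哪些过滤器可以在 nova.conf 的 [filter_scheduler] 部分配置,下面是一个配置示例(仅作说明,并非本文实际使用的配置):
[filter_scheduler]
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
# 权重计算默认使用 nova.scheduler.weights.all_weighers 中的全部权重器
weight_classes = nova.scheduler.weights.all_weighers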
 
1.先决条件
[root@controller1 ~]# source admin-openstack.sh
 
nova服务创建:
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://10.1.36.28:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://10.1.36.28:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://10.1.36.28:8774/v2.1
 
2.Nova控制节点部署    controller1
首先我们需要先在控制节点部署除nova-compute之外的其它必备的服务。
安装nova控制节点:
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
注:本文使用的Stein版中Placement已拆分为独立的openstack-placement-api软件包,将在后面的"安装和配置Placement"小节中单独安装。
编辑``/etc/nova/nova.conf``文件并完成下面的操作:
在``[DEFAULT]``部分,只启用计算和元数据API:
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
在``[api_database]``和``[database]``部分,配置数据库的连接:
[api_database]
...
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova_api
[database]
...
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova
 
在 “[DEFAULT]” 部分,配置 “RabbitMQ” 消息队列访问:
[DEFAULT]
...
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
 
 
在 “[DEFAULT]” 和 “[keystone_authtoken]” 部分,配置认证服务访问:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
 
注解
在 [keystone_authtoken] 中注释或者删除其他选项。
 
 
注:如果不配置my_ip选项,那么后面配置中有$my_ip的部分请变更为控制器节点的管理接口ip
在 [DEFAULT] 部分,使能 Networking 服务:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
注解
默认情况下,计算服务使用内置的防火墙服务。由于网络服务包含了防火墙服务,你必须使用``nova.virt.firewall.NoopFirewallDriver``防火墙服务来禁用掉计算服务内置的防火墙服务
在``[vnc]``部分,配置VNC代理使用控制节点的管理接口IP地址 :
 
[vnc]
...
enabled  =  true
server_listen=0.0.0.0
server_proxyclient_address=10.1.36.28
 
在 [glance] 区域,配置镜像服务 API 的位置:
[glance]
...
api_servers = http://10.1.36.28:9292
在 [oslo_concurrency] 部分,配置锁路径:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
 
在该[placement]部分中,配置Placement API:
 
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
 
 
配置nova.conf文件
 
# grep -v "^#\|^$"  /etc/nova/nova.conf
 
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
[api]
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
manager=nova.conductor.manager.ConductorManager
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:04aea9de5f79@10.1.36.28/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://10.1.36.28:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url =  http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
server_listen=0.0.0.0
server_proxyclient_address=10.1.36.28
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
 
同步 nova-api数据库:
su -s /bin/sh -c "nova-manage api_db sync" nova
 
 注意
忽略此输出中的任何弃用消息。
注册cell0数据库:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
 
创建cell1单元格:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
 
填充nova数据库:
su -s /bin/sh -c "nova-manage db sync" nova
 
验证nova cell0和cell1是否正确注册:
# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
|  Name |                 UUID                 |           Transport URL            |               Database Connection               | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@10.1.36.28/nova_cell0 |  False   |
| cell1 | 7244de69-18a7-4213-9bcc-f04d3d329e8e | rabbit://openstack:****@10.1.36.28 |    mysql+pymysql://nova:****@10.1.36.28/nova    |  False   |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
 
查看nova和nova_api,nova_cell0数据库是否写入成功
# mysql -unova -p'04aea9de5f79' -e "use nova_api;show tables;"
+------------------------------+
| Tables_in_nova_api           |
+------------------------------+
| aggregate_hosts              |
| aggregate_metadata           |
| aggregates                   |
| allocations                  |
.
.
.
| resource_provider_traits     |
| resource_providers           |
| traits                       |
| users                        |
+------------------------------+
# mysql -unova -p'04aea9de5f79' -e "use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| allocations                                |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
.
.
.
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| tags                                       |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+
# mysql -unova -p'04aea9de5f79' -e "use nova_cell0;show tables;"
+--------------------------------------------+
| Tables_in_nova_cell0                       |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| allocations                                |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
.
.
.
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| tags                                       |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+
 
完成安装
启动Compute服务并将其配置为在系统引导时启动:
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
 
验证服务是否起来
# openstack compute service list
+----+------------------+-------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host        | Zone     | Status  | State | Updated At                 |
+----+------------------+-------------+----------+---------+-------+----------------------------+
|  3 | nova-consoleauth | controller1 | internal | enabled | up    | 2020-05-16T01:39:06.000000 |
|  6 | nova-scheduler   | controller1 | internal | enabled | up    | 2020-05-16T01:39:12.000000 |
| 18 | nova-conductor   | controller1 | internal | enabled | up    | 2020-05-16T01:39:13.000000 |
+----+------------------+-------------+----------+---------+-------+----------------------------+
 
N版以后nova部分改动较大,参考文档:https://docs.openstack.org/nova/stein/install/controller-install-rdo.html
 
安装和配置Placement
 
在服务目录中创建Placement API条目
openstack service create --name placement --description "Placement API" placement
创建Placement API服务端点
openstack endpoint create --region RegionOne placement public http://10.1.36.28:8778
openstack endpoint create --region RegionOne placement internal http://10.1.36.28:8778
openstack endpoint create --region RegionOne placement admin http://10.1.36.28:8778
 
安装软件包
yum install -y openstack-placement-api
 
编辑/etc/placement/placement.conf文件并完成以下操作:
 
在该[placement_database]部分中,配置数据库访问:
[placement_database]
# ...
connection = mysql+pymysql://placement:04aea9de5f79@10.1.36.28/placement
上面连接串中的密码即为你为placement数据库设置的密码(本文为04aea9de5f79)。
在[api]和[keystone_authtoken]部分中,配置身份服务访问:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 04aea9de5f79
将 PLACEMENT_PASS 替换为你在身份服务中为 placement 用户选择的密码(本文为04aea9de5f79)。
注意
注释掉或删除此[keystone_authtoken] 部分中的任何其他选项。
注意
username、password、project_domain_name 和 user_domain_name 的值需要与你在keystone中的配置保持一致。
 
 
配置placement.conf文件
# grep -v "^#\|^$" /etc/placement/placement.conf
[DEFAULT]
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 04aea9de5f79
[placement]
[placement_database]
connection = mysql+pymysql://placement:04aea9de5f79@10.1.36.28/placement
填充placement数据库:
# su -s /bin/sh -c "placement-manage db sync" placement
 
查看数据库是否导入成功
[root@controller1 ~]# mysql -e 'use  placement;show tables;'
+------------------------------+
| Tables_in_placement          |
+------------------------------+
| alembic_version              |
| allocations                  |
| consumers                    |
| inventories                  |
| placement_aggregates         |
| projects                     |
| resource_classes             |
| resource_provider_aggregates |
| resource_provider_traits     |
| resource_providers           |
| traits                       |
| users                        |
+------------------------------+
由于软件包打包问题,必须将以下配置添加到 /etc/httpd/conf.d/00-placement-api.conf 中,才能启用对Placement API的访问:
 
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
 
00-placement-api.conf 的配置示范
Listen 0.0.0.0:8778
 
<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIScriptAlias / /usr/bin/placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/placement/placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>
 
Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
 
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
 
重启 httpd服务:
systemctl restart httpd memcached
 
校验安装
执行状态检查命令
[root@controller1 ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
 
# nova-status upgrade check
+--------------------------------------------------------------------+
| Upgrade Check Results                                              |
+--------------------------------------------------------------------+
| Check: Cells v2                                                    |
| Result: Success                                                    |
| Details: No host mappings or compute nodes were found. Remember to |
|   run command 'nova-manage cell_v2 discover_hosts' when new        |
|   compute hosts are deployed.                                      |
+--------------------------------------------------------------------+
| Check: Placement API                                               |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Ironic Flavor Migration                                     |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Request Spec Migration                                      |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Console Auths                                               |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
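注:如上面 Cells v2 检查结果所提示,之后每新增一台计算节点,都需要在控制节点执行一次主机发现(示例):
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
也可以在 nova.conf 的 [scheduler] 部分设置 discover_hosts_in_cells_interval = 300 实现周期性自动发现(可选)。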
 
--------------------------------------------------------------------------------------------------------------------------------------------
 
第六章 Openstack网络服务Neutron
 
生产环境中,假设我们的openstack是公有云,常规的linuxbridge结合vlan的模式在用户量很大时vlan数量是不够用的,于是我们引入vxlan技术解决云主机内网网络通讯的问题。
我们的物理服务器一般有4个网卡:一个是远控卡;一个是管理网卡(物理机之间相互通讯和管理使用);一个用于云主机外网通讯(交换机与其对接的是trunk口,云主机通过物理机上的vlan与不同外网对接);最后一个用于云主机内网通讯(交换机与其对接的是access口,并配置好IP供vxlan使用)。
 
 
 
1.先决条件
注册neutron网络服务:
 
[root@controller1 ~]# source admin-openstack.sh
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://10.1.36.28:9696
openstack endpoint create --region RegionOne network internal http://10.1.36.28:9696
openstack endpoint create --region RegionOne network admin http://10.1.36.28:9696
 
2.配置网络选项
 
Neutron在控制节点部署  controller1
[root@controller1 ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
 
Neutron控制节点配置  controller1
编辑/etc/neutron/neutron.conf文件并完成如下操作:
 
在 [database] 部分,配置数据库访问:
 
[database]
...
connection = mysql+pymysql://neutron:04aea9de5f79@10.1.36.28/neutron
 
 
在该[DEFAULT]部分中,启用模块化第2层(ML2)插件并禁用其他插件:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
 
 
在该[DEFAULT]部分中,配置RabbitMQ 消息队列访问:
[DEFAULT]
# ...
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
 
 
在 “[DEFAULT]” 和 “[keystone_authtoken]” 部分,配置认证服务访问:
 
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
www_authenticate_uri = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
auth_type = password
 
注解
 
在 [keystone_authtoken] 中注释或者删除其他选项。
在``[DEFAULT]``和``[nova]``部分,配置网络服务来通知计算节点的网络拓扑变化:
[DEFAULT]
...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
...
 
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 04aea9de5f79
 
在 [oslo_concurrency] 部分,配置锁路径:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
 
# grep -v "^#\|^$" /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =  neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
auth_strategy = keystone
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:04aea9de5f79@10.1.36.28/neutron
[keystone_authtoken]
www_authenticate_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
auth_type = password
[matchmaker_redis]
[nova]
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 04aea9de5f79
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
quota_network = 200
quota_subnet = 200
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 100
quota_floatingip = 1000
quota_security_group = 100
quota_security_group_rule = 1000
[ssl]
 
 
配置 Modular Layer 2 (ML2) 插件
ML2插件使用Linuxbridge机制来为实例创建layer-2虚拟网络基础设施
 
编辑 /etc/neutron/plugins/ml2/ml2_conf.ini文件并完成以下操作:
 
在``[ml2]``部分,启用flat和VLAN网络:
 
[ml2]
...
type_drivers = flat,vlan,gre,vxlan,geneve
在``[ml2]``部分,配置租户网络类型为vxlan:
 
[ml2]
...
tenant_network_types = vxlan
在``[ml2]``部分,启用Linuxbridge机制:
 
[ml2]
...
mechanism_drivers = linuxbridge,l2population
警告
 
在你配置完ML2插件之后,如果删除``type_drivers``项中的值,可能会导致数据库不一致。
在``[ml2]`` 部分,启用端口安全扩展驱动:
[ml2]
...
extension_drivers = port_security
在``[ml2_type_flat]``部分,配置公共虚拟网络为flat网络
[ml2_type_flat]
...
flat_networks = default
在 ``[securitygroup]``部分,启用 ipset 增加安全组规则的高效性:
[securitygroup]
...
enable_ipset = true
 
[root@controller1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = default
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = default:1:4000
[ml2_type_vxlan]
vni_ranges = 1:2000
[securitygroup]
enable_ipset = true
 
配置Linuxbridge代理
Linuxbridge代理为实例建立layer-2虚拟网络并且处理安全组规则。
 
编辑``/etc/neutron/plugins/ml2/linuxbridge_agent.ini``文件并且完成以下操作:
 
在该[linux_bridge]部分中,将提供者虚拟网络映射到提供者物理网络接口:
[linux_bridge]
physical_interface_mappings = default:eth1
 
在``[vxlan]``部分
 
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.21
在 ``[securitygroup]``部分,启用安全组并配置 Linuxbridge iptables firewall driver:
[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
 
[root@controller1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = default:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.21
 
配置DHCP代理
The DHCP agent provides DHCP services for virtual networks.
 
编辑/etc/neutron/dhcp_agent.ini文件并完成下面的操作:
 
在``[DEFAULT]``部分,配置Linuxbridge驱动接口,DHCP驱动并启用隔离元数据,这样在公共网络上的实例就可以通过网络来访问元数据
 
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
 
[root@controller1 ~]# grep -v "^#\|^$" /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
 
 
配置元数据代理
元数据代理(metadata agent)负责提供配置信息,例如:访问实例的凭证
编辑``/etc/neutron/metadata_agent.ini``文件并完成以下操作:
在``[DEFAULT]`` 部分,配置元数据主机以及共享密码:
[DEFAULT]
...
nova_metadata_ip = 10.1.36.28
metadata_proxy_shared_secret = 04aea9de5f79
 
# grep -v '^#\|^$' /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = 10.1.36.28
metadata_proxy_shared_secret = 04aea9de5f79
[cache]
 
配置l3
 
# grep -v '^#\|^$' /etc/neutron/l3_agent.ini
[DEFAULT]
ovs_use_veth = False
interface_driver = linuxbridge
debug = True
 
完成安装
网络服务初始化脚本需要一个符号链接 /etc/neutron/plugin.ini 指向ML2插件配置文件 /etc/neutron/plugins/ml2/ml2_conf.ini。如果该符号链接不存在,使用下面的命令创建它:
 
[root@controller1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
 
同步数据库:
 
[root@controller1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
注解
 
Networking 的数据库同步放在后面执行,是因为该脚本依赖于已完成的服务端和插件配置文件。
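同步完成后,可以参照前面其他服务的做法检查neutron数据库是否写入成功(示例):
mysql -uneutron -p'04aea9de5f79' -e "use neutron;show tables;"
输出中应能看到 agents、networks、ports、subnets 等表。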
 
为计算节点配置网络服务
编辑/etc/nova/nova.conf文件并完成下面的操作:
 
在``[neutron]`` 部分,配置访问参数,启用元数据代理并设置密码:
 
[neutron]
...
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
 
重启计算API 服务:
 
[root@controller1 ~]# systemctl restart openstack-nova-api.service
启动 Networking 服务并将其配置为随系统启动。
 
对于两种网络选项:
 
[root@controller1 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
对于网络选项2,同样启用layer-3服务并设置其随系统自启动
 
[root@controller1 ~]# systemctl enable neutron-l3-agent.service
[root@controller1 ~]# systemctl start neutron-l3-agent.service
 
 
Verify that neutron is working on the controller node:
[root@controller1 ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 36134331-0c29-4eaa-b287-93e69836d419 | DHCP agent         | controller1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 67b10d2b-2438-40e1-8402-70219cd5100c | Metadata agent     | controller1 | None              | :-)   | UP    | neutron-metadata-agent    |
| 6e40171c-6be3-49a7-93d0-ee54ce831025 | Linux bridge agent | controller1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 7fbb4072-6358-4cf6-8b6e-9631bb0c9eac | L3 agent           | controller1 | nova              | :-)   | UP    | neutron-l3-agent          |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
 
 
 
终极检验示范:
[root@controller1 ~]# openstack extension list --network
 
 
 
----------------------------------------------------------------------------------------------------------------
 
第七章 Openstack管理服务Horizon
 
安装软件包:
 
# yum install openstack-dashboard -y
编辑文件 /etc/openstack-dashboard/local_settings 并完成如下动作:
 
在 controller 节点上配置仪表盘以使用 OpenStack 服务:
 
OPENSTACK_HOST = "10.1.36.28"
允许所有主机访问仪表板:
 
ALLOWED_HOSTS = ['*', ]
Configure the session and caching backends (this deployment keeps file-based sessions and uses memcached only for caching):
 
#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '10.1.36.28:11211',
    }
}
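
If the dashboard later misbehaves, it is worth confirming the memcached endpoint referenced above is reachable from the controller. A minimal hedged probe, assuming nc (nmap-ncat) is installed:

# a healthy memcached prints a block of STAT lines
[root@controller1 ~]# echo stats | nc -w 2 10.1.36.28 11211 | head -5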
 
 
启用第3版认证API:
 
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
启用对域的支持
 
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
配置API版本:
 
OPENSTACK_API_VERSIONS = {
     "identity": 3,
     "volume": 3,
     "image": 2,
     "compute": 2,
}
通过仪表盘创建用户时的默认域配置为 default :
 
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
通过仪表盘创建的用户默认角色配置为 user :
 
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
If you chose networking option 1, disable support for layer-3 networking services here; this deployment uses option 2 (self-service networks), so all of the settings below stay enabled:
 
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_lb': True,
    'enable_firewall': True,
    'enable_vpn': True,
    'enable_fip_topology_check': True,
}
可以选择性地配置时区:
 
TIME_ZONE = "Asia/Shanghai"
 
最终配置示范:
# grep -v '#\|^$' /etc/openstack-dashboard/local_settings
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*', ]
OPENSTACK_API_VERSIONS = {
     "identity": 3,
     "volume": 2,
     "image": 2,
     "compute": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
LOCAL_PATH = '/tmp'
SECRET_KEY='3f508e8a4399dffa3323'
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '10.1.36.21:11211',
    },
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "10.1.36.21"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}
LAUNCH_INSTANCE_DEFAULTS = {
    'config_drive': False,
    'enable_scheduler_hints': True,
    'disable_image': False,
    'disable_instance_snapshot': False,
    'disable_volume': False,
    'disable_volume_snapshot': False,
    'create_volume': False,
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': True,
    'requires_keypair': False,
    'enable_quotas': True
}
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_lb': True,
    'enable_firewall': True,
    'enable_vpn': True,
    'enable_fip_topology_check': True,
}
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "Asia/Shanghai"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'operation': {
            'format': '%(asctime)s %(message)s'
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
        'operation': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'operation',
        },
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'horizon.operation_log': {
            'handlers': ['operation'],
            'level': 'INFO',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'heatclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS',
                              'OPENSTACK_IMAGE_FORMATS',
                              'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN']
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}
 
完成安装
默认安装的httpd运行模式是prefork
[root@node1 ~]# httpd -V
Server version: Apache/2.4.6 (CentOS)
Server built:   Jul 29 2019 17:18:49
Server's Module Magic Number: 20120211:24
Server loaded:  APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture:   64-bit
Server MPM:     prefork
  threaded:     no
    forked:     yes (variable process count)
To switch httpd 2.4 to the event MPM, edit /etc/httpd/conf.modules.d/00-mpm.conf so that only the following LoadModule line is left enabled, then restart httpd and verify with httpd -V:
LoadModule mpm_event_module modules/mod_mpm_event.so
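A hedged sketch of that edit on the stock CentOS file, where the prefork line is the one enabled by default; restart httpd afterwards so the new MPM takes effect:

# sed -i -e 's|^LoadModule mpm_prefork_module|#LoadModule mpm_prefork_module|' \
        -e 's|^#LoadModule mpm_event_module|LoadModule mpm_event_module|' /etc/httpd/conf.modules.d/00-mpm.conf
# systemctl restart httpd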
[root@node1 ~]# httpd -V
Server version: Apache/2.4.6 (CentOS)
Server built:   Jul 29 2019 17:18:49
Server's Module Magic Number: 20120211:24
Server loaded:  APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture:   64-bit
Server MPM:     event
  threaded:     yes (fixed thread count)
    forked:     yes (variable process count)
重启web服务器以及会话存储服务:
[root@controller1 ~]# systemctl restart httpd.service memcached.service
 
验证仪表盘的操作。
 
Access the dashboard in a web browser at http://10.1.36.28/dashboard.
 
Log in with the admin or demo user credentials and the default domain.
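
A quick hedged smoke test from the command line before opening a browser (the exact status code may vary by Horizon version, but a 200 or a 302 redirect to the login page means the WSGI application is being served):

# curl -s -o /dev/null -w "%{http_code}\n" http://10.1.36.28/dashboard/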
 
 
 
 
 
计算节点的相关服务的安装与配置
 
Nova计算节点部署 compute1
nova-compute一般运行在计算节点上,通过message queue接收并管理VM的生命周期
nova-compute通过libvirt管理KVM,通过XenAPI管理Xen
 
基础软件包安装
基础软件包需要在所有的OpenStack节点上进行安装,包括控制节点和计算节点。
提前安装好常用软件
yum install -y vim net-tools wget lrzsz tree screen lsof tcpdump nmap bridge-utils
 
1.安装EPEL仓库
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
 
2.安装OpenStack仓库
OpenStack Stein: the CentOS 7.6 Cloud SIG currently ships four releases — queens, rocky, stein and train — and we use stein, the second-newest of them.
 
The centos-release-openstack-* release packages are shipped in the CentOS extras repository, so they can be installed directly with yum:
# yum search openstack | grep release
centos-release-openstack-queens.noarch : OpenStack from the CentOS Cloud SIG
centos-release-openstack-rocky.noarch : OpenStack from the CentOS Cloud SIG repo
centos-release-openstack-stein.noarch : OpenStack from the CentOS Cloud SIG repo
centos-release-openstack-train.noarch : OpenStack from the CentOS Cloud SIG repo
 
# yum install centos-release-openstack-stein -y
 
3.安装OpenStack客户端
yum install -y python-openstackclient
4.安装openstack SELinux管理包
yum install -y openstack-selinux
 
5.时间同步
安装网络守时服务
Openstack节点之间必须时间同步,不然可能会导致创建云主机不成功。
# yum install chrony -y
# vim /etc/chrony.conf #修改NTP配置
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
 
# systemctl enable chronyd.service   # enable the NTP service at boot
# systemctl start chronyd.service    # start time synchronization
# chronyc sources                    # verify the NTP sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? ControllerNode                0   6     0     -     +0ns[   +0ns] +/-    0ns
 
设置时区
timedatectl set-timezone Asia/Shanghai
 
部署和配置nova-compute
 
[root@compute1 ~]# yum install -y openstack-nova-compute 
 
编辑``/etc/nova/nova.conf``文件并完成下面的操作:
 
在该[DEFAULT]部分中,仅启用计算和元数据API:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
 
在[DEFAULT]部分,配置``RabbitMQ``消息队列的连接:
[DEFAULT]
...
 
Note: since the Newton release, OpenStack no longer supports the rpc_backend option; configure transport_url instead (see the full nova.conf below).
在   [api] 和 [keystone_authtoken] 部分,配置认证服务访问:
[api]
...
auth_strategy = keystone
[keystone_authtoken]
...
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
 
注解
在 [keystone_authtoken] 中注释或者删除其他选项。
在 [DEFAULT] 部分,使能 Networking 服务:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
注解
缺省情况下,Compute 使用内置的防火墙服务。由于 Networking 包含了防火墙服务,所以你必须通过使用 nova.virt.firewall.NoopFirewallDriver 来去除 Compute 内置的防火墙服务。
 
在``[vnc]``部分,启用并配置远程控制台访问:
 
[vnc]
...
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.1.36.24
 
服务器组件监听所有的 IP 地址,而代理组件仅仅监听计算节点管理网络接口的 IP 地址。基本的 URL 指示您可以使用 web 浏览器访问位于该计算节点上实例的远程控制台的位置。
 
注解
如果你运行浏览器的主机无法解析``controller`` 主机名,你可以将 ``controller``替换为你控制节点管理网络的IP地址。
在 [glance] 区域,配置镜像服务 API 的位置:
 
[glance]
...
在 [oslo_concurrency] 部分,配置锁路径:
 
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
 
在该[placement]部分中,配置Placement API:
 
[placement]
...
auth_url = http://10.1.36.28:5000/v3
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password = 04aea9de5f79
 
 
[root@compute1 ~]# grep -v '^#\|^$' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.1.36.28:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
virt_type = kvm
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen=0.0.0.0
server_proxyclient_address= 10.1.36.24
novncproxy_base_url = http://10.1.36.28:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
 
 
完成安装
确定您的计算节点是否支持虚拟机的硬件加速。
 
$ egrep -c '(vmx|svm)' /proc/cpuinfo
如果这个命令返回了 one or greater 的值,那么你的计算节点支持硬件加速且不需要额外的配置。
 
如果这个命令返回了 zero 值,那么你的计算节点不支持硬件加速。你必须配置 libvirt 来使用 QEMU 去代替 KVM
在 /etc/nova/nova.conf 文件的 [libvirt] 区域做出如下的编辑:
[libvirt]
...
virt_type = qemu
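
A hedged one-liner that applies this rule automatically; it assumes crudini (available from the OpenStack repositories) is installed, otherwise edit the file by hand:

[root@compute1 ~]# yum install -y crudini
# set virt_type to qemu only when the CPU exposes no VMX/SVM extensions
[root@compute1 ~]# if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then crudini --set /etc/nova/nova.conf libvirt virt_type qemu; fi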
 
启动计算服务及其依赖,并将其配置为随系统自动启动:
 
[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
 
ceph和nova的结合
 
安装前我们配置下yum源,这里使用的是较新的nautilus版本
[root@compute1 ~]#  cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
 
安装ceph-common
[root@compute1 ~]#  yum install ceph-common -y
 
[root@compute1 ~]# rpm -qa | grep ceph-common
ceph-common-14.2.9-0.el7.x86_64
 
Note: fetch the ceph.conf and ceph.client.cinder.keyring files from the ceph cluster and place them in /etc/ceph/ on every nova-compute node.
[root@compute1 ceph]# ls -lh /etc/ceph/
total 8.0K
-rwxrwxrwx 1 nova nova   64 May 22 09:21 ceph.client.cinder.keyring
-rwxrwxrwx 1 nova nova 1.5K May 22 09:44 ceph.conf
Because the exact permission requirements were not investigated, the blunt workaround used here was to grant everything: chmod -R 777 /etc/ceph && chown -R nova.nova /etc/ceph/ (in practice it is enough for the nova user to be able to read these files).
If /etc/ceph or the files under it are not readable, nova-compute fails with: ERROR nova.compute.manager PermissionDeniedError: [errno 13] error calling conf_read_file
 
推送client.cinder.key给计算节点compute1
[root@ceph-host-01 ceph-cluster]# ceph auth get-key client.cinder | ssh compute1 tee client.cinder.key
 
libvirt秘钥
nova-compute所在节点需要将client.cinder用户的秘钥文件存储到libvirt中;当基于ceph后端的cinder卷被attach到虚拟机实例时,libvirt需要用到该秘钥以访问ceph集群;
# 在ceph的admin节点向计算节点推送client.cinder秘钥文件,生成的文件是临时性的,将秘钥添加到libvirt后可删除
# 在计算节点将秘钥加入libvirt,以node3节点为例;
# 首先生成1个uuid,全部计算和cinder节点可共用此uuid(其他节点不用操作此步);
# uuid后续配置nova.conf文件时也会用到,请保持一致
[root@compute1 ~]# uuidgen
2b706e33-609e-4542-9cc5-1a01703a292f
# 在libvirt上添加秘钥
[root@compute1 ~]# vim secret.xml
<secret ephemeral='no' private='no'>
     <uuid>2b706e33-609e-4542-9cc5-1a01703a292f</uuid>
     <usage type='ceph'>
         <name>client.cinder secret</name>
     </usage>
</secret>
[root@compute1 ~]# virsh secret-define --file secret.xml
Secret 2b706e33-609e-4542-9cc5-1a01703a292f created
 
[root@compute1 ~]# virsh secret-set-value --secret 2b706e33-609e-4542-9cc5-1a01703a292f --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
 
Note: the value passed to --base64 is the key field from /etc/ceph/ceph.client.cinder.keyring, i.e. the client.cinder key that was dumped into client.cinder.key above:
[root@compute1 ~]# cat client.cinder.key
AQC37MRe3U6XHhAA4AUWhAlyh8bUqrMny1X8bw==
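
To confirm the secret really is stored in libvirt, a quick hedged check; the value printed by the second command should match the client.cinder key shown above:

[root@compute1 ~]# virsh secret-list
[root@compute1 ~]# virsh secret-get-value 2b706e33-609e-4542-9cc5-1a01703a292f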
 
配置ceph.conf
# 如果需要从ceph rbd中启动虚拟机,必须将ceph配置为nova的临时后端;
# 推荐在计算节点的配置文件中启用rbd cache功能;
# 为了便于故障排查,配置admin socket参数,这样每个使用ceph rbd的虚拟机都有1个socket将有利于虚拟机性能分析与故障解决;
# 相关配置只涉及全部计算节点ceph.conf文件的[client]与[client.cinder]字段,以compute01节点为例
[root@compute1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 3948cba4-b0fa-4e61-84f5-3cec08dd5859
mon_initial_members = ceph-host-01, ceph-host-02, ceph-host-03
mon_host = 10.1.36.11,10.1.36.12,10.1.36.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
 
mon clock drift allowed = 2
mon clock drift warn backoff = 30
 
public_network = 10.1.36.0/16
cluster_network = 192.168.36.0/24
 
max_open_files = 131072
mon_pg_warn_max_per_osd = 1000
osd pool default pg num = 256
osd pool default pgp num = 256
osd pool default size = 2
osd pool default min size = 1
 
mon_osd_full_ratio = .90
mon_osd_nearfull_ratio = .80
osd_deep_scrub_randomize_ratio = 0.01
 
[mon]
mon_allow_pool_delete = true
mon_osd_down_out_interval = 600
mon_osd_min_down_reporters = 3
[mgr]
mgr modules = dashboard
[osd]
osd_journal_size = 20480
osd_max_write_size = 1024
osd mkfs type = xfs
osd_recovery_op_priority = 1
osd_recovery_max_active = 1
osd_recovery_max_single_start = 1
osd_recovery_threads = 1
osd_recovery_max_chunk = 1048576
osd_max_backfills = 1
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 7
osd_recovery_sleep = 0
 
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
rbd_concurrent_management_ops = 10
rbd_cache_size = 67108864
rbd_cache_max_dirty = 50331648
rbd_cache_target_dirty = 33554432
rbd_cache_max_dirty_age = 2
rbd_default_format = 2
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
# 创建ceph.conf文件中指定的socker与log相关的目录,并更改属主
[root@compute1 ~]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@compute1 ~]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
注:生产环境发现/var/run/ceph/guests目录老是会在服务器重启后消失,并导致计算节点不可用(无法创建和删除云主机),所以我在下方写了一个定时检测并创建/var/run/ceph/guests/目录的任务
echo '*/3 * * * * root if [ ! -d /var/run/ceph/guests/ ] ;then mkdir -pv /var/run/ceph/guests/ /var/log/qemu/ && chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/ && systemctl restart libvirtd.service openstack-nova-compute.service ;fi' >>/etc/crontab
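An alternative to the cron workaround on systemd hosts is to declare the directories in tmpfiles.d so they are recreated on every boot; this is a hedged sketch (the file name ceph-guests.conf is arbitrary, adjust the owner/group if your qemu and libvirt accounts differ):

[root@compute1 ~]# cat > /etc/tmpfiles.d/ceph-guests.conf <<'EOF'
d /var/run/ceph/guests 0755 qemu libvirt -
d /var/log/qemu 0755 qemu libvirt -
EOF
[root@compute1 ~]# systemd-tmpfiles --create /etc/tmpfiles.d/ceph-guests.conf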
 
# 在全部计算节点配置nova后端使用ceph集群的vms池
修改/etc/nova/nova.conf文件添加以下部分
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# uuid前后一致
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
# disable file injection
inject_password = false
inject_key = false
inject_partition = -2
# 虚拟机临时root磁盘discard功能,”unmap”参数在scsi接口类型磁盘释放后可立即释放空间
hw_disk_discard = unmap
# 原有配置
virt_type=kvm
[root@compute1 ~]# cat /etc/nova/nova.conf
[DEFAULT]
debug = True
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
use_neutron=True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver
allow_resize_to_same_host = true
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
live_migration_retry_count = 30
[api]
auth_strategy = keystone
use_forwarded_for = true
[api_database]
[barbican]
[cache]
[cells]
[cinder]
catalog_info = volumev3:cinderv3:internalURL
os_region_name = RegionOne
[compute]
[conductor]
workers = 5
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.1.36.28:9292
num_retries = 3
debug = True
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers = 10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name =  Default
user_domain_name =  Default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
disk_cachemodes = "network=writeback"
hw_disk_discard = unmap
virt_type = kvm
[metrics]
[mks]
[neutron]
url = http://10.1.36.28:9696
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
compute = auto
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen=0.0.0.0
server_proxyclient_address= 10.1.36.25
novncproxy_base_url = http://10.1.36.28:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
 
重启计算服务及其依赖
 
[root@compute1 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
 
注:重启nova服务后最好查看服务启动是否正常,如果openstack-nova-compute服务启动异常可以通过查看/var/log/nova/nova-compute.log日志排查
systemctl status libvirtd.service openstack-nova-compute.service
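For example, a hedged quick scan for problems after the restart:

[root@compute1 ~]# systemctl is-active libvirtd.service openstack-nova-compute.service
[root@compute1 ~]# grep -iE "error|traceback" /var/log/nova/nova-compute.log | tail -n 20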
配置live-migration
修改/etc/libvirt/libvirtd.conf
# 在全部计算节点操作,以compute01节点为例;
# 以下给出libvirtd.conf文件的修改处所在的行num
[root@compute1 ~]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf
# uncomment the following lines (and adjust where noted)
22:listen_tls = 0
33:listen_tcp = 1
45:tcp_port = "16509"        # uncomment and set the listening port
55:listen_addr = "0.0.0.0"   # uncomment; listen on all addresses
158:auth_tcp = "none"        # uncomment; disable TCP authentication
修改/etc/sysconfig/libvirtd
# 在全部计算节点操作,以compute01节点为例;
# 以下给出libvirtd文件的修改处所在的行num
[root@node3 ~]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
# uncomment the following line
9:LIBVIRTD_ARGS="--listen"
设置iptables
# live-migration时,源计算节点主动连接目的计算节点tcp16509端口,可以使用”virsh -c qemu+tcp://{node_ip or node_name}/system”连接目的计算节点测试;
# 迁移前后,在源目计算节点上的被迁移instance使用tcp49152~49161端口做临时通信;
# 因虚拟机已经启用iptables相关规则,此时切忌随意重启iptables服务,尽量使用插入的方式添加规则;
# persist the rules by editing the iptables rules file directly; avoid save-style commands (e.g. "service iptables save") that rewrite the whole rule set
# 在全部计算节点操作,以compute01节点为例
[root@compute1 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 16509 -j ACCEPT
[root@compute1 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49161 -j ACCEPT
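To persist these rules across reboots in the way the note above recommends, a hedged sketch assuming the stock /etc/sysconfig/iptables file is in use: add the two lines below to that file, before any final REJECT rule, instead of running a save command.

-A INPUT -p tcp -m state --state NEW -m tcp --dport 16509 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49161 -j ACCEPT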
重启服务
# libvirtd与nova-compute服务都需要重启
[root@compute1 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
# 查看服务
[root@compute1 ~]# netstat -tunlp | grep 16509
tcp        0      0 10.1.36.24:16509        0.0.0.0:*               LISTEN      13107/libvirtd
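With libvirtd listening, the tcp connection test mentioned above can now be run; 10.1.36.25 below is just a stand-in for whichever destination compute node you are checking:

[root@compute1 ~]# virsh -c qemu+tcp://10.1.36.25/system list --all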
 
 
验证是否成功:
 
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack compute service list --service nova-compute
 
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  9 | nova-compute     | compute1 | nova     | enabled | up    | 2019-02-18T07:16:34.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
 
Discover compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 7244de69-18a7-4213-9bcc-f04d3d329e8e
Found 0 unmapped computes in cell: 7244de69-18a7-4213-9bcc-f04d3d329e8e
 
Note
When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
 
或者使用下面的命令做验证
[root@controller1 ~]# openstack host list
+------------+-------------+----------+
| Host Name  | Service     | Zone     |
+------------+-------------+----------+
| controller1 | consoleauth | internal |
| controller1 | scheduler   | internal |
| controller1 | conductor   | internal |
| compute1 | compute     | nova     |
+------------+-------------+----------+
 
[root@controller1 ~]# nova service-list
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 4790ca20-37c3-4fbf-92d1-72a7b584f6f6 | nova-consoleauth | controller1 | internal | enabled | up    | 2019-02-18T07:19:10.000000 | -               | False       |
| 69a69d43-98c3-436e-866b-03d7944d4186 | nova-scheduler   | controller1 | internal | enabled | up    | 2019-02-18T07:19:10.000000 | -               | False       |
| 14bb7cc2-0e80-4ef5-9f28-0775a69d7943 | nova-conductor   | controller1 | internal | enabled | up    | 2019-02-18T07:19:09.000000 | -               | False       |
| b20775d6-213e-403d-bfc5-2a3c3f6438e1 | nova-compute     | compute1 | nova     | enabled | up    | 2019-02-18T07:19:14.000000 | -               | False       |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
 
If all four of these services show up as enabled and up, the nova deployment succeeded.
 
验证nova与glance的连接,如下说明成功
[root@controller1 ~]# openstack image list
+--------------------------------------+-----------------+--------+
| ID                                   | Name            | Status |
+--------------------------------------+-----------------+--------+
| 9560cd59-868a-43ec-8231-351c09bdfe5a | cirros3.4       | active |
+--------------------------------------+-----------------+--------+
 
[root@controller1 ~]# openstack image show 9560cd59-868a-43ec-8231-351c09bdfe5a
+------------------+--------------------------------------------------------------------------------------------+
| Field            | Value                                                                                      |
+------------------+--------------------------------------------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                                                           |
| container_format | bare                                                                                       |
| created_at       | 2020-05-13T05:39:17Z                                                                       |
| disk_format      | qcow2                                                                                      |
| file             | /v2/images/9560cd59-868a-43ec-8231-351c09bdfe5a/file                                       |
| id               | 9560cd59-868a-43ec-8231-351c09bdfe5a                                                       |
| min_disk         | 0                                                                                          |
| min_ram          | 0                                                                                          |
| name             | cirros3.4                                                                                  |
| owner            | f004bf0d5c874f2c978e441bddfa2724                                                           |
| properties       | locations='[{u'url': u'rbd://3948cba4-b0fa-4e61-84f5-3cec08dd5859/images/9560cd59-868a-    |
|                  | 43ec-8231-351c09bdfe5a/snap', u'metadata': {}}]', os_hash_algo='sha512', os_hash_value='1b |
|                  | 03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f739d1b5eb4c4c |
|                  | a24d38fdc6f4f9d8247a2bc64db25d6bbdb2', os_hidden='False'                                   |
| protected        | False                                                                                      |
| schema           | /v2/schemas/image                                                                          |
| size             | 13287936                                                                                   |
| status           | active                                                                                     |
| tags             |                                                                                            |
| updated_at       | 2020-05-13T05:39:21Z                                                                       |
| virtual_size     | None                                                                                       |
| visibility       | public                                                                                     |
+------------------+--------------------------------------------------------------------------------------------+
 
Note: since the Newton release the nova image-list command is no longer supported (it has been replaced by glance image-list / openstack image list), so the commands above are the ones to use.
 
N版后官方推荐的验证办法:
# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller1 | internal | enabled | up    | 2019-02-18T07:21:30.000000 |
|  2 | nova-scheduler   | controller1 | internal | enabled | up    | 2019-02-18T07:21:40.000000 |
|  3 | nova-conductor   | controller1 | internal | enabled | up    | 2019-02-18T07:21:40.000000 |
|  9 | nova-compute     | compute1 | nova     | enabled | up    | 2019-02-18T07:21:34.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
 
验证nova与keystone的连接,如下说明成功
# openstack catalog list
 
# nova-status upgrade check
+---------------------------------------------------------------------+
| Upgrade Check Results                                               |
+---------------------------------------------------------------------+
| Check: Cells v2                                                     |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Placement API                                                |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Resource Providers                                           |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
 
 
 
扩展:计算节点间的云主机迁移
Before migrating, make sure the compute nodes (node3 and node4 here) can reach each other over SSH without a password; passwordless SSH between compute nodes is the key prerequisite for successful instance migration. A simple demonstration follows.
Using ceph-host-04 and ceph-host-02 as the example pair: generate a key pair with ssh-keygen on one host (ceph-host-04), append the content of /root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys, and then copy the three files (/root/.ssh/id_rsa, /root/.ssh/id_rsa.pub and /root/.ssh/authorized_keys) to every other host (including ceph-host-04 and ceph-host-02). With identical key material on all N hosts, each of them can log in to any other without a password.
 
[root@ceph-host-04 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:MGkIRd0B3Juv6+7OlNOknVGWGKXOulP4b/ddw+e+RDg root@ceph-host-04
The key's randomart image is:
+---[RSA 2048]----+
|  .ooo.+.....    |
|   . .o.o  + .   |
|    . =  oo +    |
|     . ooo o  .  |
|        So=  E . |
|        .Boo  +  |
|        *++    +o|
|       ooo. . o.=|
|       =Oo o.. +*|
+----[SHA256]-----+
[root@ceph-host-04 ~]# ssh-copy-id ceph-host-04
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-host-04 (10.30.1.224)' can't be established.
ECDSA key fingerprint is SHA256:qjCvy9Q/qRV2HIT0bt6ev//3rOGVntxAPQRDZ4aXfEE.
ECDSA key fingerprint is MD5:99:db:b6:3d:83:0e:c2:56:25:47:f6:1b:d7:bd:f0:ce.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-host-04's password:
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'ceph-host-04'"
and check to make sure that only the key(s) you wanted were added.
 
[root@ceph-host-04 ~]# ssh-copy-id ceph-host-02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-host-02's password:
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'ceph-host-02'"
and check to make sure that only the key(s) you wanted were added.
 
[root@ceph-host-04 ~]# scp .ssh/id_rsa root@ceph-host-02:/root/.ssh/
id_rsa 
[root@ceph-host-04 ~]# ssh  ceph-host-02 w
01:23:10 up  5:20,  1 user,  load average: 0.12, 0.18, 0.36
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    desktop-l37krfr. 23:27    1:58   0.14s  0.14s -bash
[root@ceph-host-02 ~]# ssh  ceph-host-04 w
01:25:01 up  5:22,  1 user,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    desktop-l37krfr. 22:04    5.00s  0.26s  0.26s -bash
 
Note: in effect it is enough for every host to share the same /root/.ssh/id_rsa and /root/.ssh/authorized_keys; the id_rsa.pub content is the same as what ends up in authorized_keys anyway.
Production use: for passwordless root SSH between OpenStack compute nodes (needed for migrating instances between them), follow the simple procedure above; when a new compute node is added, just copy the existing /root/.ssh/id_rsa to it and append the matching id_rsa.pub to its /root/.ssh/authorized_keys, as shown in the sketch below.
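A hedged sketch of pushing the shared key material to a list of compute nodes (the hostnames compute2 and compute3 are placeholders; it assumes password-based SSH still works for this initial copy):

[root@compute1 ~]# for host in compute2 compute3; do \
      scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub root@$host:/root/.ssh/ ; \
      ssh root@$host 'cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys' ; \
  done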
 
Neutron计算节点配置  compute1
 
Neutron在计算节点中的部署  compute1
[root@compute1 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset
 
安装用于监控数据包方面的conntrack-tools软件(可选)
[root@compute1 ~]# yum install -y conntrack-tools
neutron计算节点:(将neutron的配置文件拷贝到计算节点)
 
编辑/etc/neutron/neutron.conf文件并完成以下操作:
 
在该[database]部分中,注释掉任何connection选项,因为计算节点不直接访问数据库。
 
在该[DEFAULT]部分中,配置RabbitMQ 消息队列访问:
 
[DEFAULT]
...
 
在 “[DEFAULT]” 和 “[keystone_authtoken]” 部分,配置认证服务访问:
 
[DEFAULT]
...
auth_strategy = keystone
 
[keystone_authtoken]
...
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
 
在 [oslo_concurrency] 部分,配置锁路径:
 
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
 
 
# grep -v '^#\|^$' /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
[cors]
[database]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = neutron
password = 04aea9de5f79
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
配置网络选项
选择与您之前在控制节点上选择的相同的网络选项。之后,回到这里并进行下一步:为计算节点配置网络服务。
 
配置Linux网桥代理
Linux网桥代理为实例构建第2层(桥接和交换)虚拟网络基础结构并处理安全组。
 
编辑/etc/neutron/plugins/ml2/linuxbridge_agent.ini文件并完成以下操作:
 
在本[linux_bridge]节中,将提供者虚拟网络映射到提供者物理网络接口:
 
[linux_bridge]
physical_interface_mappings = default:eth1
 
在该[vxlan]部分中,启动VXLAN覆盖网络:
 
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.24
在本[securitygroup]节中,启用安全组并配置Linux网桥iptables防火墙驱动程序:
 
[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
 
 
[root@compute1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = default:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = 192.168.36.24
 
 
为计算节点配置网络服务
编辑/etc/nova/nova.conf文件并完成下面的操作:
 
在``[neutron]`` 部分,配置访问参数,启用元数据代理并设置密码:
[neutron]
...
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
 
 
[root@compute1 ~]#  grep -v "^#\|^$"  /etc/nova/nova.conf
[DEFAULT]
debug = True
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
use_neutron=True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver
allow_resize_to_same_host = true
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
live_migration_retry_count = 30
[api]
auth_strategy = keystone
use_forwarded_for = true
[api_database]
[barbican]
[cache]
[cells]
[cinder]
catalog_info = volumev3:cinderv3:internalURL
os_region_name = RegionOne
[compute]
[conductor]
workers = 5
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.1.36.28:9292
num_retries = 3
debug = True
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers = 10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name =  Default
user_domain_name =  Default
project_name = service
username = nova
password = 04aea9de5f79
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
disk_cachemodes = "network=writeback"
hw_disk_discard = unmap
virt_type = kvm
[metrics]
[mks]
[neutron]
url = http://10.1.36.28:9696
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 04aea9de5f79
service_metadata_proxy = true
metadata_proxy_shared_secret = 04aea9de5f79
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.36.28:5000/v3
username = placement
password = 04aea9de5f79
[placement_database]
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
compute = auto
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen=0.0.0.0
server_proxyclient_address= 10.1.36.25
novncproxy_base_url = http://10.1.36.28:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
完成安装
重启计算服务:
 
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
启动Linuxbridge代理并配置它开机自启动:
 
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
 
检验nentron在计算节点是否OK
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 36134331-0c29-4eaa-b287-93e69836d419 | DHCP agent         | controller1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 67b10d2b-2438-40e1-8402-70219cd5100c | Metadata agent     | controller1 | None              | :-)   | UP    | neutron-metadata-agent    |
| 6e40171c-6be3-49a7-93d0-ee54ce831025 | Linux bridge agent | controller1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 7fbb4072-6358-4cf6-8b6e-9631bb0c9eac | L3 agent           | controller1 | nova              | :-)   | UP    | neutron-l3-agent          |
| c5fbf4e0-0d72-40b0-bb53-c383883a0d19 | Linux bridge agent | compute1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
 
代表计算节点的Linux bridge agent已成功连接到控制节点。
 
Openstack块存储服务Cinder
Cinder官方文档:https://docs.openstack.org/cinder
块存储服务(cinder)为实例提供块存储。存储的分配和消耗是由块存储驱动器,或者多后端配置的驱动器决定的。还有很多驱动程序可用:NAS/SAN,NFS,ISCSI,Ceph等。
安装并配置控制节点
数据库和授权在开始已经做过了,这里不再重复
要创建服务证书,完成这些步骤:
创建一个 cinder 用户:
[root@controller1 ~]#  source admin-openstack.sh
[root@controller1 ~]#   openstack user create --domain default --password=04aea9de5f79 cinder
添加 admin 角色到 cinder 用户上。
[root@controller1 ~]#  openstack role add --project service --user cinder admin
创建 cinder 和 cinderv2 服务实体:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
注解
块设备存储服务要求两个服务实体。
创建块设备存储服务的 API 入口点:
openstack endpoint create --region RegionOne volumev2 public http://10.1.36.28:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://10.1.36.28:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://10.1.36.28:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://10.1.36.28:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://10.1.36.28:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://10.1.36.28:8776/v3/%\(project_id\)s
块设备存储服务每个服务实体都需要端点。
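A hedged check that the endpoints were registered as expected:

[root@controller1 ~]# openstack endpoint list --service volumev2
[root@controller1 ~]# openstack endpoint list --service volumev3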
Install and configure components
安装软件包:
[root@controller1 ~]#  yum install -y openstack-cinder
编辑 /etc/cinder/cinder.conf,同时完成如下动作:
在 [database] 部分,配置数据库访问:
[database]
...
在 “[DEFAULT]” 部分,配置 “RabbitMQ” 消息队列访问:
[DEFAULT]
...
Replace the RabbitMQ password in transport_url with the one you chose for the openstack account in RabbitMQ (see the complete cinder.conf below).
在 “[DEFAULT]” 和 “[keystone_authtoken]” 部分,配置认证服务访问:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
www_authenticate_uri = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 04aea9de5f79
在 [oslo_concurrency] 部分,配置锁路径:
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
Populate the Block Storage service database (the complete cinder.conf used here is shown first, followed by the db sync command):
[root@node1 images]# grep -v "^#\|^$" /etc/cinder/cinder.conf
[DEFAULT]
glance_api_servers = http://10.1.36.28:9292
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
auth_strategy = keystone
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:04aea9de5f79@10.1.36.28/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000
auth_url = http://10.1.36.28:5000
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 04aea9de5f79
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[ceph]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[root@node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
[root@node1 images]# mysql -ucinder -p04aea9de5f79 -e "use cinder;show tables;"
 
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| attachment_specs           |
| backup_metadata            |
| backups                    |
| cgsnapshots                |
| clusters                   |
| consistencygroups          |
| driver_initiator_data      |
| encryption                 |
| group_snapshots            |
| group_type_projects        |
| group_type_specs           |
| group_types                |
| group_volume_type_mapping  |
| groups                     |
| image_volume_cache_entries |
| messages                   |
| migrate_version            |
| quality_of_service_specs   |
| quota_classes              |
| quota_usages               |
| quotas                     |
| reservations               |
| services                   |
| snapshot_metadata          |
| snapshots                  |
| transfers                  |
| volume_admin_metadata      |
| volume_attachment          |
| volume_glance_metadata     |
| volume_metadata            |
| volume_type_extra_specs    |
| volume_type_projects       |
| volume_types               |
| volumes                    |
| workers                    |
+----------------------------+
配置计算节点以使用块设备存储
编辑文件 /etc/nova/nova.conf 并添加如下到其中:
[cinder]
catalog_info = volumev3:cinderv3:internalURL
os_region_name = RegionOne
完成安装
重启计算API 服务:
[root@controller1 ~]# systemctl restart openstack-nova-api.service
启动块设备存储服务,并将其配置为开机自启:
[root@controller1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
验证块设备存储服务的操作。
[root@controller1 ~]#source admin-openstack.sh
[root@controller1 ~]# openstack volume service list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1      | nova | enabled | up    | 2020-05-16T08:06:17.000000 | -               |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
-----------------------------
 
ceph与cinder的结合
 
准备工作
 
安装前我们配置下yum源,这里使用的是较新的nautilus版本
[root@controller1 ~]#  cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
安装ceph-common
[root@controller1 ~]#  yum install ceph-common -y
 
[root@controller1 ~]# rpm -qa | grep ceph-common
ceph-common-14.2.9-0.el7.x86_64
 
提前在cinder节点的/etc/ceph/目录下放好ceph.conf和ceph.client.cinder.keyring这2个文件
 
[root@controller1 ~]# ls -lh /etc/ceph/
total 16K
-rw-r--r-- 1 glance glance   64 May 12 09:05 ceph.client.cinder.keyring
-rw-r----- 1 glance glance   64 May 12 09:03 ceph.client.glance.keyring
-rw-r--r-- 1 glance glance 1.5K May 12 13:45 ceph.conf
-rw-r--r-- 1 glance glance   92 Apr 10 01:28 rbdmap
 
# use ceph as the volume backend
[DEFAULT]
enabled_backends = ceph
# add a new [ceph] section;
# note: rbd_user and rbd_secret_uuid below must match the libvirt secret configured on the compute nodes
# ceph rbd driver
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
volume_backend_name = ceph
 
# 如果配置多后端,则“glance_api_version”必须配置在[DEFAULT] section
[DEFAULT]
glance_api_version = 2
# 变更配置文件,重启服务
整体配置如下:
[root@controller1 ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
debug = True
use_forwarded_for = true
use_stderr = False
osapi_volume_workers = 5
volume_name_template = volume-%s
glance_api_servers = http://10.1.36.28:9292
glance_num_retries = 3
glance_api_version = 2
os_region_name = RegionOne
enabled_backends = ceph
api_paste_config = /etc/cinder/api-paste.ini
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
auth_strategy = keystone
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:04aea9de5f79@10.1.36.28/cinder
max_retries = -1
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://10.1.36.28:5000/v3
auth_url = http://10.1.36.28:5000/v3
memcached_servers =10.1.36.21:11211,10.1.36.22:11211,10.1.36.23:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 04aea9de5f79
[nova]
interface = internal
auth_url = http://10.1.36.28:5000
auth_type = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = 04aea9de5f79
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
transport_url = rabbit://openstack:04aea9de5f79@10.1.36.21:5672,openstack:04aea9de5f79@10.1.36.22:5672,openstack:04aea9de5f79@10.1.36.23:5672
driver = noop
[oslo_messaging_rabbit]
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f
volume_backend_name = ceph
report_discard_supported = True
image_upload_use_cinder_backend = True
[oslo_middleware]
enable_proxy_headers_parsing = True
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
cinder节点
[root@controller1 ~]# systemctl restart openstack-cinder-volume.service
重启controller的cinder服务
[root@controller1 ~]# systemctl restart  openstack-cinder-scheduler openstack-cinder-api
Note: volume_driver = cinder.volume.drivers.rbd.RBDDriver corresponds to /usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py.
查看服务状态:  
[root@controller1 ~]# cinder service-list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1      | nova | enabled | up    | 2020-05-16T08:06:17.000000 | -               |
| cinder-volume    | controller1@ceph | nova | enabled | up    | 2020-05-16T08:06:18.000000 | -               |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
controller建立type
[root@controller1 ~]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| f1df2ecf-44ce-4174-8b8e-69e0177efd9e | ceph | -           | True      |
+--------------------------------------+------+-------------+-----------+
On the controller, bind the cinder volume type to volume_backend_name:
[root@controller1 ~]# cinder type-key ceph set volume_backend_name=ceph
# Check the type's extra specs
[root@controller1 ~]# cinder extra-specs-list
+--------------------------------------+------+---------------------------------+
| ID                                   | Name | extra_specs                     |
+--------------------------------------+------+---------------------------------+
| f1df2ecf-44ce-4174-8b8e-69e0177efd9e | ceph | {'volume_backend_name': 'ceph'} |
+--------------------------------------+------+---------------------------------+
Restart the cinder services on the controller:
[root@controller1 ~]# systemctl restart openstack-cinder-scheduler openstack-cinder-api  
 
Create a volume to test:
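A minimal sketch of this step, assuming an arbitrary volume name and size; the --volume-type ceph flag ties the request to the backend defined above, and the resulting RBD image then shows up in the volumes pool as listed below:
 
# create a 10 GB test volume on the ceph backend and confirm it
cinder create --volume-type ceph --display-name test-volume 10
cinder list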
 
 
[root@ceph-host-01 ~]# rbd ls volumes
volume-a61b1b60-b55b-493d-ae21-6605ef8cfc35
 
As for cinder high availability, it simply means the cinder services are deployed on all three controller nodes.
[root@controller1 ~]# cinder service-list         
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1      | nova | enabled | up    | 2020-05-18T08:14:49.000000 | -               |
| cinder-scheduler | controller2      | nova | enabled | up    | 2020-05-18T08:14:51.000000 | -               |
| cinder-scheduler | controller3      | nova | enabled | up    | 2020-05-18T08:14:55.000000 | -               |
| cinder-volume    | controller1@ceph | nova | enabled | up    | 2020-05-18T08:14:55.000000 | -               |
| cinder-volume    | controller2@ceph | nova | enabled | up    | 2020-05-18T08:14:51.000000 | -               |
| cinder-volume    | controller3@ceph | nova | enabled | up    | 2020-05-18T08:14:55.000000 | -               |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
 
Booting from a block device
You can create a volume from an image with the Cinder command-line tool:
cinder create --image-id {id of image} --display-name {name of volume} {size of volume}
You can use qemu-img to convert an image from one format to another. For example:
qemu-img convert -f {source-format} -O {output-format} {source-filename} {output-filename}
qemu-img convert -f qcow2 -O raw precise-cloudimg.img precise-cloudimg.raw
[root@controller1 ~]# qemu-img convert -f qcow2 -O raw new_centos7.4.qcow2 centos7.4.raw
[root@controller1 ~]# qemu-img info centos7.4.raw        
image: centos7.4.raw
file format: raw
virtual size: 30G (32212254720 bytes)
disk size: 1.1G
[root@controller1 ~]# source admin-openstack.sh
[root@controller1 ~]# openstack image create "CentOS 7.4 64位"  --file centos7.4.raw --disk-format raw --container-format bare --public
Since the image is fairly large, storing it in the Ceph cluster takes a little while.
[root@ceph-host-01 ~]# rbd ls images
73fbe706-fb02-428f-815d-8e97375767a3
9560cd59-868a-43ec-8231-351c09bdfe5a
9e22baf9-71da-49bb-8edf-be0cc09bc8c3
[root@ceph-host-01 ~]# rbd info images/9e22baf9-71da-49bb-8edf-be0cc09bc8c3
rbd image '9e22baf9-71da-49bb-8edf-be0cc09bc8c3':
        size 30 GiB in 3840 objects
        order 23 (8 MiB objects)
        snapshot_count: 1
        id: 2880bbd7308b5
        block_name_prefix: rbd_data.2880bbd7308b5
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Thu May 21 11:02:24 2020
        access_timestamp: Thu May 21 11:02:24 2020
        modify_timestamp: Thu May 21 11:47:10 2020
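The snapshot_count of 1 comes from the protected snapshot that Glance creates on the image so that volumes and disks can be cloned from it copy-on-write (the same @snap that appears as the parent of the volume further below). A hedged way to inspect it from a ceph node:
 
# list the snapshots Glance created on the image; a protected snapshot named "snap" is expected
rbd snap ls images/9e22baf9-71da-49bb-8edf-be0cc09bc8c3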
 
To wrap up, here is a demonstration of instance creation:
 
1. Boot an instance from a volume (create a bootable volume backed by Ceph)
 
The rough flow for attaching a Ceph RBD volume to a VM is as follows: when both Glance and Cinder use Ceph block devices, the image is a copy-on-write clone, so new volumes can be created from it very quickly. In the OpenStack dashboard, you can boot from such a volume with the following steps (a CLI equivalent is sketched after this list):
1. Launch a new instance.
2. Select the image associated with the copy-on-write clone.
3. Select "Boot from volume".
4. Select the volume you created.
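A hedged CLI equivalent of the dashboard steps above; the volume name, size and instance name are illustrative, the image ID and net-id are the ones used earlier, and <volume-id> stands for the ID returned by cinder create:
 
# create a bootable volume from the Ceph-backed image; the copy-on-write clone makes this fast
cinder create --image-id 9e22baf9-71da-49bb-8edf-be0cc09bc8c3 --volume-type ceph --display-name centos-boot-vol 30
# boot an instance from that volume (replace <volume-id> with the ID returned above)
nova boot --flavor 2c2g --boot-volume <volume-id> \
  --nic net-id=23348359-077f-4133-b484-d9d6195f806a \
  --security-group default centos-vm-from-volume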
 
 
 
Inspect the instance's XML definition:
[root@compute2 ~]# virsh list --uuid
e76962a0-56cf-4b47-b3e7-9cb589d29e6d
 
 
[root@compute2 ~]# virsh dumpxml e76962a0-56cf-4b47-b3e7-9cb589d29e6d
<domain type='kvm' id='12'>
  <name>instance-00000083</name>
  <uuid>e76962a0-56cf-4b47-b3e7-9cb589d29e6d</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="19.1.0-1.el7"/>
      <nova:name>centos-vm1</nova:name>
      <nova:creationTime>2020-05-21 04:49:54</nova:creationTime>
      <nova:flavor name="2c2g">
        <nova:memory>2048</nova:memory>
        <nova:disk>40</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>2</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="efe2970c7ab74c67a4aced146cee3fb0">admin</nova:user>
        <nova:project uuid="f004bf0d5c874f2c978e441bddfa2724">admin</nova:project>
      </nova:owner>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>2048</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>RDO</entry>
      <entry name='product'>OpenStack Compute</entry>
      <entry name='version'>19.1.0-1.el7</entry>
      <entry name='serial'>e76962a0-56cf-4b47-b3e7-9cb589d29e6d</entry>
      <entry name='uuid'>e76962a0-56cf-4b47-b3e7-9cb589d29e6d</entry>
      <entry name='family'>Virtual Machine</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Nehalem-IBRS</model>
    <vendor>Intel</vendor>
    <topology sockets='2' cores='1' threads='1'/>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='tsc-deadline'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='stibp'/>
    <feature policy='require' name='ssbd'/>
    <feature policy='require' name='rdtscp'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='cinder'>
        <secret type='ceph' uuid='2b706e33-609e-4542-9cc5-1a01703a292f'/>
      </auth>
      <source protocol='rbd' name='volumes/volume-d4c71c06-b118-4a71-9076-074efc211f16'>
        <host name='10.1.36.11' port='6789'/>
        <host name='10.1.36.12' port='6789'/>
        <host name='10.1.36.13' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <serial>d4c71c06-b118-4a71-9076-074efc211f16</serial>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:e8:9f:03'/>
      <source bridge='brq23348359-07'/>
      <target dev='tapc816a9fb-5a'/>
      <model type='virtio'/>
      <mtu size='1500'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='fa:16:3e:b5:0a:c4'/>
      <source bridge='brq4a974777-fd'/>
      <target dev='tap9459b9d8-e6'/>
      <model type='virtio'/>
      <mtu size='1450'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <log file='/var/lib/nova/instances/e76962a0-56cf-4b47-b3e7-9cb589d29e6d/console.log' append='off'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <log file='/var/lib/nova/instances/e76962a0-56cf-4b47-b3e7-9cb589d29e6d/console.log' append='off'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='10'/>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>
 
[root@ceph-host-01 ~]# rbd ls volumes
volume-d4c71c06-b118-4a71-9076-074efc211f16
[root@ceph-host-01 ~]# rbd info  volumes/volume-d4c71c06-b118-4a71-9076-074efc211f16
rbd image 'volume-d4c71c06-b118-4a71-9076-074efc211f16':
        size 30 GiB in 7680 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 29c8d319f2e27
        block_name_prefix: rbd_data.29c8d319f2e27
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Thu May 21 12:34:50 2020
        access_timestamp: Thu May 21 13:01:01 2020
        modify_timestamp: Thu May 21 13:02:42 2020
        parent: images/9e22baf9-71da-49bb-8edf-be0cc09bc8c3@snap
        overlap: 30 GiB
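The parent field confirms that the volume is a copy-on-write clone of the Glance image snapshot. A hedged way to list every clone hanging off that snapshot from a ceph node:
 
# list all copy-on-write clones of the Glance image snapshot
rbd children images/9e22baf9-71da-49bb-8edf-be0cc09bc8c3@snap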
 
 
 
2. Boot a VM directly from a Ceph RBD-backed image
# --nic: net-id is the network ID, not the subnet ID;
# the trailing "centos-vm1" is the instance name
[root@controller1 ~]# nova boot --flavor 2c2g  --image 'CentOS 7.4 64位' --availability-zone nova \
--nic net-id=23348359-077f-4133-b484-d9d6195f806a,v4-fixed-ip=192.168.99.122 \
--nic net-id=4a974777-fd29-4678-9e70-9545b4208943,v4-fixed-ip=192.168.100.122 \
--security-group default  centos-vm1
+--------------------------------------+--------------------------------------------------------+
| Property                             | Value                                                  |
+--------------------------------------+--------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                 |
| OS-EXT-AZ:availability_zone          | nova                                                   |
| OS-EXT-SRV-ATTR:host                 | -                                                      |
| OS-EXT-SRV-ATTR:hostname             | centos-vm1                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                      |
| OS-EXT-SRV-ATTR:instance_name        |                                                        |
| OS-EXT-SRV-ATTR:kernel_id            |                                                        |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                      |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                        |
| OS-EXT-SRV-ATTR:reservation_id       | r-jigp2tpl                                             |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                      |
| OS-EXT-SRV-ATTR:user_data            | -                                                      |
| OS-EXT-STS:power_state               | 0                                                      |
| OS-EXT-STS:task_state                | scheduling                                             |
| OS-EXT-STS:vm_state                  | building                                               |
| OS-SRV-USG:launched_at               | -                                                      |
| OS-SRV-USG:terminated_at             | -                                                      |
| accessIPv4                           |                                                        |
| accessIPv6                           |                                                        |
| adminPass                            | WuJoYkD46mLY                                           |
| config_drive                         |                                                        |
| created                              | 2020-05-22T06:27:57Z                                   |
| description                          | -                                                      |
| flavor:disk                          | 40                                                     |
| flavor:ephemeral                     | 0                                                      |
| flavor:extra_specs                   | {}                                                     |
| flavor:original_name                 | 2c2g                                                   |
| flavor:ram                           | 2048                                                   |
| flavor:swap                          | 0                                                      |
| flavor:vcpus                         | 2                                                      |
| hostId                               |                                                        |
| host_status                          |                                                        |
| id                                   | 92b28257-b5b6-41a4-aebc-9726358d7015                   |
| image                                | CentOS 7.4 64位 (9e22baf9-71da-49bb-8edf-be0cc09bc8c3) |
| key_name                             | -                                                      |
| locked                               | False                                                  |
| metadata                             | {}                                                     |
| name                                 | centos-vm1                                             |
| os-extended-volumes:volumes_attached | []                                                     |
| progress                             | 0                                                      |
| security_groups                      | default                                                |
| server_groups                        | []                                                     |
| status                               | BUILD                                                  |
| tags                                 | []                                                     |
| tenant_id                            | f004bf0d5c874f2c978e441bddfa2724                       |
| trusted_image_certificates           | -                                                      |
| updated                              | 2020-05-22T06:27:57Z                                   |
| user_id                              | efe2970c7ab74c67a4aced146cee3fb0                       |
+--------------------------------------+--------------------------------------------------------+
 
# List the newly created instance
[root@controller1 ~]# openstack server list
+--------------------------------------+------------+--------+-------------------------------------------------+-----------------+--------+
| ID                                   | Name       | Status | Networks                                        | Image           | Flavor |
+--------------------------------------+------------+--------+-------------------------------------------------+-----------------+--------+
| 92b28257-b5b6-41a4-aebc-9726358d7015 | centos-vm1 | ACTIVE | vlan99=192.168.99.122; vxlan100=192.168.100.122 | CentOS 7.4 64位 | 2c2g   |
+--------------------------------------+------------+--------+-------------------------------------------------+-----------------+--------+
 
# Show the details of the new instance
[root@controller1 ~]# openstack server show 92b28257-b5b6-41a4-aebc-9726358d7015
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | compute1                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute1                                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000095                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2020-05-22T06:28:51.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | vlan99=192.168.99.122; vxlan100=192.168.100.122          |
| config_drive                        |                                                          |
| created                             | 2020-05-22T06:27:57Z                                     |
| flavor                              | 2c2g (82cc2a11-7b19-4a10-a86e-2408253b70e2)              |
| hostId                              | 49c5f207c741862ee74ae91c1256ad6fe9de334c25195b0897b06150 |
| id                                  | 92b28257-b5b6-41a4-aebc-9726358d7015                     |
| image                               | CentOS 7.4 64位 (9e22baf9-71da-49bb-8edf-be0cc09bc8c3)   |
| key_name                            | None                                                     |
| name                                | centos-vm1                                               |
| progress                            | 0                                                        |
| project_id                          | f004bf0d5c874f2c978e441bddfa2724                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
|                                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2020-05-22T06:28:51Z                                     |
| user_id                             | efe2970c7ab74c67a4aced146cee3fb0                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+
# Verify that the instance boots from Ceph RBD
[root@ceph-host-01 ~]# rbd ls vms
92b28257-b5b6-41a4-aebc-9726358d7015_disk
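The instance's root disk lands in the vms pool because nova-compute uses RBD as its ephemeral image backend. For reference, a hedged reminder of the relevant [libvirt] options in nova.conf, with values assumed to match the nova/ceph configuration used in this deployment (the cinder user and secret UUID are the ones visible in the libvirt XML below):
 
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 2b706e33-609e-4542-9cc5-1a01703a292f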
3. Live-migrate the RBD-booted VM
# "openstack server show 92b28257-b5b6-41a4-aebc-9726358d7015" shows that the RBD-booted instance sits on compute1 before the migration;
# alternatively, verify with "nova hypervisor-servers compute1";
[root@controller1 ~]# nova hypervisor-servers compute1
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| ID                                   | Name              | Hypervisor ID                        | Hypervisor Hostname |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| 92b28257-b5b6-41a4-aebc-9726358d7015 | instance-00000095 | 83801656-d148-40e7-b6fd-409993f5931d | compute1            |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
[root@controller1 ~]# nova hypervisor-servers compute2
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
 
[root@controller1 ~]# nova live-migration centos-vm1 compute2
# The status can be checked while the migration is in progress
[root@controller1 ~]# openstack server list
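A hedged way to watch the migration from the controller, selecting just the relevant columns of openstack server show (the 5-second interval is arbitrary):
 
watch -n 5 "openstack server show 92b28257-b5b6-41a4-aebc-9726358d7015 -c status -c OS-EXT-STS:task_state -c OS-EXT-SRV-ATTR:host"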
 
# After the migration completes, check which node the instance is on;
# or check "hypervisor_hostname" with "openstack server show 92b28257-b5b6-41a4-aebc-9726358d7015"
[root@controller1 ~]# nova hypervisor-servers compute2       
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| ID                                   | Name              | Hypervisor ID                        | Hypervisor Hostname |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| 92b28257-b5b6-41a4-aebc-9726358d7015 | instance-00000095 | e433bd1a-13f6-42e9-a176-adb8250ec254 | compute2            |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
[root@controller1 ~]# nova hypervisor-servers compute1       
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+
Inspect the instance's XML definition again after the migration:
[root@compute2 ~]# virsh dumpxml instance-00000095
<domain type='kvm' id='1'>
  <name>instance-00000095</name>
  <uuid>92b28257-b5b6-41a4-aebc-9726358d7015</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="19.1.0-1.el7"/>
      <nova:name>centos-vm1</nova:name>
      <nova:creationTime>2020-05-22 06:28:49</nova:creationTime>
      <nova:flavor name="2c2g">
        <nova:memory>2048</nova:memory>
        <nova:disk>40</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>2</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="efe2970c7ab74c67a4aced146cee3fb0">admin</nova:user>
        <nova:project uuid="f004bf0d5c874f2c978e441bddfa2724">admin</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="9e22baf9-71da-49bb-8edf-be0cc09bc8c3"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>2048</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>RDO</entry>
      <entry name='product'>OpenStack Compute</entry>
      <entry name='version'>19.1.0-1.el7</entry>
      <entry name='serial'>92b28257-b5b6-41a4-aebc-9726358d7015</entry>
      <entry name='uuid'>92b28257-b5b6-41a4-aebc-9726358d7015</entry>
      <entry name='family'>Virtual Machine</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Nehalem-IBRS</model>
    <vendor>Intel</vendor>
    <topology sockets='2' cores='1' threads='1'/>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='tsc-deadline'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='stibp'/>
    <feature policy='require' name='ssbd'/>
    <feature policy='require' name='rdtscp'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='cinder'>
        <secret type='ceph' uuid='2b706e33-609e-4542-9cc5-1a01703a292f'/>
      </auth>
      <source protocol='rbd' name='vms/92b28257-b5b6-41a4-aebc-9726358d7015_disk'>
        <host name='10.1.36.11' port='6789'/>
        <host name='10.1.36.12' port='6789'/>
        <host name='10.1.36.13' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:b8:80:be'/>
      <source bridge='brq23348359-07'/>
      <target dev='tap5d1d3450-68'/>
      <model type='virtio'/>
      <mtu size='1500'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='fa:16:3e:32:b7:3c'/>
      <source bridge='brq4a974777-fd'/>
      <target dev='tap43d4a7a0-f7'/>
      <model type='virtio'/>
      <mtu size='1450'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <log file='/var/lib/nova/instances/92b28257-b5b6-41a4-aebc-9726358d7015/console.log' append='off'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <log file='/var/lib/nova/instances/92b28257-b5b6-41a4-aebc-9726358d7015/console.log' append='off'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='10'/>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>
 
References:
https://blog.51cto.com/127601/2434072
https://www.cnblogs.com/sammyliu/p/4804037.html (Understanding OpenStack + Ceph (1): Ceph + OpenStack cluster deployment and configuration)
https://docs.ceph.com/docs/giant/rbd/rbd-openstack/ (official Ceph documentation on integrating Ceph with OpenStack)
https://www.cnblogs.com/netonline/tag/openstack/ (highly available OpenStack (Queens) cluster)
 
 
 
