[Toc]
https://github.com/wanstack/AutoMitaka # A lovingly contributed OpenStack HA installation script in Python + shell; it implements the basic core functionality (pure layer-2 networking). Forks are welcome, and please remember to star the project. Many thanks.
---
title: OpenStack Mitaka cluster installation and deployment
date:
tags: Openstack
---
  9.  
### OpenStack Mitaka HA deployment and testing guide
  11.  
#### 1. Environment
  13.  
##### 1. Host environment
  15.  
```
controller(VIP)  192.168.10.100
controller01     192.168.10.101, 10.0.0.1
controller02     192.168.10.102, 10.0.0.2
controller03     192.168.10.103, 10.0.0.3
compute01        192.168.10.104, 10.0.0.4
compute02        192.168.10.105, 10.0.0.5
```
This environment is for testing only and focuses on verifying the HA functionality. For production, the networks should be properly separated.
  25.  
#### 2. Base environment configuration
  27.  
##### 1. Configure host name resolution
  29.  
```
# Set the hostname on the corresponding node:

hostnamectl set-hostname controller01
hostname controller01

hostnamectl set-hostname controller02
hostname controller02

hostnamectl set-hostname controller03
hostname controller03

hostnamectl set-hostname compute01
hostname compute01

hostnamectl set-hostname compute02
hostname compute02
```
  48.  
```
# Configure host resolution on controller01:

[root@controller01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.100 controller
192.168.10.101 controller01
192.168.10.102 controller02
192.168.10.103 controller03

192.168.10.104 compute01
192.168.10.105 compute02
```
  64.  
##### 2. Configure passwordless SSH between nodes
  66.  
```
# On controller01:

ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
ssh-copy-id -i .ssh/id_rsa.pub root@controller02
ssh-copy-id -i .ssh/id_rsa.pub root@controller03
ssh-copy-id -i .ssh/id_rsa.pub root@compute01
ssh-copy-id -i .ssh/id_rsa.pub root@compute02
```
  76.  
```
# Copy the hosts file to the other nodes
scp /etc/hosts controller02:/etc/hosts
scp /etc/hosts controller03:/etc/hosts
scp /etc/hosts compute01:/etc/hosts
scp /etc/hosts compute02:/etc/hosts
```
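To confirm the key distribution and name resolution worked, a quick loop like the following (run from controller01) should log in to every node without a password prompt; the node list matches the environment above.

```
# Run on controller01: each node should print its hostname and its controller entries
# without asking for a password.
for h in controller02 controller03 compute01 compute02; do
    echo "== $h =="
    ssh root@$h "hostname; grep controller /etc/hosts | head -4"
done
```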
  84.  
##### 3. Configure the yum repositories
  86.  
All nodes in this test environment have working Internet access, so the Aliyun mirrors are used for the base and OpenStack repositories.
  88.  
```
# Enable the yum cache on all controller and compute nodes
[root@controller01 ~]# cat /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever
# keepcache=1 enables the package cache, keepcache=0 disables it (the default is 0)
keepcache=1
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release

# Base repository
yum install wget -y
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# OpenStack Mitaka repository
yum install centos-release-openstack-mitaka -y
# The default baseurl points at the CentOS mirror; switching to the Aliyun mirror is recommended because it is faster
[root@controller01 yum.repos.d]# vim CentOS-OpenStack-mitaka.repo
# CentOS-OpenStack-mitaka.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Cloud for more
# information

[centos-openstack-mitaka]
name=CentOS-7 - OpenStack mitaka
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-mitaka/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud

# Galera (MariaDB) repository
vim mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64/
enabled=1
gpgcheck=1
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
```
  136.  
Copy the repo files to all other nodes with scp:
  138.  
  139. ```
  140. scp CentOS-OpenStack-mitaka.repo controller02:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
  141. scp CentOS-OpenStack-mitaka.repo controller03:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
  142. scp CentOS-OpenStack-mitaka.repo compute01:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
  143. scp CentOS-OpenStack-mitaka.repo compute02:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
  144.  
  145. scp mariadb.repo controller02:/etc/yum.repos.d/mariadb.repo
  146. scp mariadb.repo controller03:/etc/yum.repos.d/mariadb.repo
  147. scp mariadb.repo compute01:/etc/yum.repos.d/mariadb.repo
  148. scp mariadb.repo compute02:/etc/yum.repos.d/mariadb.repo
  149. ```
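A quick sanity check of the repositories on every node can look like the sketch below; it assumes all nodes are reachable over SSH from controller01 and only confirms that the OpenStack and MariaDB repos are listed.

```
# Verify that the mitaka and mariadb repositories are visible on every node
for h in controller01 controller02 controller03 compute01 compute02; do
    echo "== $h =="
    ssh root@$h "yum clean all -q; yum repolist enabled 2>/dev/null | grep -Ei 'openstack|mariadb'"
done
```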
  150.  
##### 4. NTP configuration
An NTP server already exists in this environment, so it is used directly. If you do not have one, using the controller as the NTP server is recommended.
  153.  
```
yum install ntpdate -y
echo "*/5 * * * * /usr/sbin/ntpdate 192.168.2.161 >/dev/null 2>&1" >> /var/spool/cron/root
/usr/sbin/ntpdate 192.168.2.161
```
  159.  
##### 5. Disable the firewall and SELinux
  161.  
```
systemctl disable firewalld.service
systemctl stop firewalld.service
sed -i -e "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
sed -i -e "s#SELINUXTYPE=targeted#\#SELINUXTYPE=targeted#g" /etc/selinux/config
setenforce 0
systemctl stop NetworkManager
systemctl disable NetworkManager
```
  171.  
##### 6. Install and configure Pacemaker
  173.  
```
# Install the following packages on all controller nodes
yum install -y pcs pacemaker corosync fence-agents-all resource-agents

# Edit the corosync configuration file
[root@controller01 ~]# cat /etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: openstack-cluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: controller01
        nodeid: 1
    }
    node {
        ring0_addr: controller02
        nodeid: 2
    }
    node {
        ring0_addr: controller03
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
    # three controller nodes, so two_node mode stays disabled
    two_node: 0
}

logging {
    to_syslog: yes
}
```
  210.  
```
# Copy the configuration file to the other controller nodes
scp /etc/corosync/corosync.conf controller02:/etc/corosync/corosync.conf
scp /etc/corosync/corosync.conf controller03:/etc/corosync/corosync.conf
```
  216.  
```
# Check the cluster membership (once the cluster has been started)
corosync-cmapctl runtime.totem.pg.mrp.srp.members
```
  221.  
```
# Start the pcsd service on all controller nodes
systemctl enable pcsd
systemctl start pcsd

# Set the password of the hacluster user on all controller nodes
echo hacluster | passwd --stdin hacluster

# [controller01] Authenticate to the cluster nodes
pcs cluster auth controller01 controller02 controller03 -u hacluster -p hacluster --force
# [controller01] Create and start the cluster
pcs cluster setup --force --name openstack-cluster controller01 controller02 controller03
pcs cluster start --all
# [controller01] Set the cluster properties
pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min
# [controller01] Disable STONITH for now, otherwise resources cannot start
pcs property set stonith-enabled=false

# [controller01] Ignore quorum loss
pcs property set no-quorum-policy=ignore

# [controller01] Create the VIP resource; the VIP can float between cluster nodes (the management network is assumed to be a /24)
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.10.100 cidr_netmask="24" op monitor interval="30s"
```
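Before moving on, it is worth confirming that the cluster formed and the VIP is up. A minimal check against the three controllers defined above:

```
# All three nodes should be online and quorate, with the vip resource started on exactly one of them
pcs status
corosync-quorumtool -s

# The node currently holding the VIP should show 192.168.10.100 on one of its interfaces
ip addr | grep 192.168.10.100
```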
  247.  
##### 7. Install HAProxy
  249.  
```
# [all controller nodes] Install the package
yum install -y haproxy

# [all controller nodes] Create /etc/rsyslog.d/haproxy.conf
echo "\$ModLoad imudp" >> /etc/rsyslog.d/haproxy.conf;
echo "\$UDPServerRun 514" >> /etc/rsyslog.d/haproxy.conf;
echo "local3.* /var/log/haproxy.log" >> /etc/rsyslog.d/haproxy.conf;
echo "&~" >> /etc/rsyslog.d/haproxy.conf;

# [all controller nodes] Edit /etc/sysconfig/rsyslog
sed -i -e 's#SYSLOGD_OPTIONS=\"\"#SYSLOGD_OPTIONS=\"-c 2 -r -m 0\"#g' /etc/sysconfig/rsyslog

# [all controller nodes] Restart the rsyslog service
systemctl restart rsyslog

# Create the haproxy base configuration
vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log 127.0.0.1 local3
    chroot /var/lib/haproxy
    daemon
    group haproxy
    maxconn 4000
    pidfile /var/run/haproxy.pid
    user haproxy

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    log global
    maxconn 4000
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

# The per-service listen sections are appended to this file in the following
# sections (HAProxy itself has no include directive).
```
  319.  
```
# Copy the configuration to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
```
  325.  
```
# [controller01] Add the haproxy resource to the pacemaker cluster
pcs resource create haproxy systemd:haproxy --clone
# kind=Optional means the order constraint only applies when both resources are being started and/or stopped at the same time;
# changes to the first resource do not otherwise affect the second one, and the resource named first is started first.
pcs constraint order start vip then haproxy-clone kind=Optional
# The vip resource determines where the haproxy-clone resource may run (colocation constraint)
pcs constraint colocation add haproxy-clone with vip
ping -c 3 192.168.10.100
```
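The following quick check (from any controller node) verifies that the haproxy clone is running everywhere and that it is ordered and colocated with the VIP as intended:

```
# haproxy-clone should be Started on all three controllers, vip on one of them
pcs resource
# Show the order/colocation constraints created above
pcs constraint
# haproxy is managed by pacemaker, so systemd should report it active on every controller
systemctl status haproxy --no-pager | head -5
```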
  335.  
##### 8. Install and configure Galera
  337.  
```
# Basic steps on all controller nodes: install the packages and prepare the configuration files
yum install -y MariaDB-server xinetd

# Configure on all controller nodes
vim /usr/lib/systemd/system/mariadb.service
# Add the following two lines under the [Service] section:
LimitNOFILE=10000
LimitNPROC=10000

systemctl --system daemon-reload
systemctl restart mariadb.service

# Initialize the database; running this on controller01 is enough
systemctl start mariadb
mysql_secure_installation

# Check the connection limit
show variables like 'max_connections';

# Stop the service before editing the configuration files
systemctl stop mariadb

# Back up the original configuration file
cp /etc/my.cnf.d/server.cnf /etc/my.cnf.d/bak.server.cnf
```
  365.  
  366. ```
# server.cnf on controller01
  368. cat /etc/my.cnf.d/server.cnf
  369. [mysqld]
  370. datadir=/var/lib/mysql
  371. socket=/var/lib/mysql/mysql.sock
  372. user=mysql
  373. binlog_format=ROW
  374. max_connections = 4096
  375. bind-address= 192.168.10.101
  376.  
  377. default_storage_engine=innodb
  378. innodb_autoinc_lock_mode=2
  379. innodb_flush_log_at_trx_commit=0
  380. innodb_buffer_pool_size=122M
  381.  
  382. wsrep_on=ON
  383. wsrep_provider=/usr/lib64/galera/libgalera_smm.so
  384. wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
  385. wsrep_cluster_name="galera_cluster"
  386. wsrep_cluster_address="gcomm://controller01,controller02,controller03"
  387. wsrep_node_name= controller01
  388. wsrep_node_address= 192.168.10.101
  389. wsrep_sst_method=rsync
  390. ```
  391.  
  392. ```
# server.cnf on controller02
  394. cat /etc/my.cnf.d/server.cnf
  395. [mysqld]
  396. datadir=/var/lib/mysql
  397. socket=/var/lib/mysql/mysql.sock
  398. user=mysql
  399. binlog_format=ROW
  400. max_connections = 4096
  401. bind-address= 192.168.10.102
  402.  
  403. default_storage_engine=innodb
  404. innodb_autoinc_lock_mode=2
  405. innodb_flush_log_at_trx_commit=0
  406. innodb_buffer_pool_size=122M
  407.  
  408. wsrep_on=ON
  409. wsrep_provider=/usr/lib64/galera/libgalera_smm.so
  410. wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
  411. wsrep_cluster_name="galera_cluster"
  412. wsrep_cluster_address="gcomm://controller01,controller02,controller03"
  413. wsrep_node_name= controller02
  414. wsrep_node_address= 192.168.10.102
  415. wsrep_sst_method=rsync
  416. ```
  417.  
  418. ```
# server.cnf on controller03
  420. cat /etc/my.cnf.d/server.cnf
  421. [mysqld]
  422. datadir=/var/lib/mysql
  423. socket=/var/lib/mysql/mysql.sock
  424. user=mysql
  425. binlog_format=ROW
  426. max_connections = 4096
  427. bind-address= 192.168.10.103
  428.  
  429. default_storage_engine=innodb
  430. innodb_autoinc_lock_mode=2
  431. innodb_flush_log_at_trx_commit=0
  432. innodb_buffer_pool_size=122M
  433.  
  434. wsrep_on=ON
  435. wsrep_provider=/usr/lib64/galera/libgalera_smm.so
  436. wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
  437. wsrep_cluster_name="galera_cluster"
  438. wsrep_cluster_address="gcomm://controller01,controller02,controller03"
  439. wsrep_node_name= controller03
  440. wsrep_node_address= 192.168.10.103
  441. wsrep_sst_method=rsync
  442. ```
  443.  
```
# Run on controller01 to bootstrap the cluster
galera_new_cluster

# Watch the log
tail -f /var/log/messages

# Start MariaDB on the other controller nodes
systemctl enable mariadb
systemctl start mariadb
```
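Once all three MariaDB instances are up, the cluster state can be checked from any controller; with the configuration above, wsrep_cluster_size should be 3 and the local state Synced (the root password is whatever was set in mysql_secure_installation, openstack in this document):

```
mysql -uroot -popenstack -e "SHOW STATUS LIKE 'wsrep_cluster_size';"          # expect 3
mysql -uroot -popenstack -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"   # expect Synced
mysql -uroot -popenstack -e "SHOW STATUS LIKE 'wsrep_incoming_addresses';"    # all three controllers listed
```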
  455.  
  456. ```
# Add the check user and verify access through the VIP
  458. mysql -uroot -popenstack -e "use mysql;INSERT INTO user(Host, User) VALUES('192.168.10.100', 'haproxy_check');FLUSH PRIVILEGES;"
  459. mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller01' IDENTIFIED BY '"openstack"'";
  460. mysql -uroot -popenstack -h 192.168.10.100 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
  461. ```
  462.  
```
# Configure haproxy for galera
# All controller nodes: add the galera listen section to the haproxy configuration

cat /etc/haproxy/haproxy.cfg
listen galera_cluster
    bind 192.168.10.100:3306
    balance source
    #option mysql-check user haproxy_check
    server controller01 192.168.10.101:3306 check port 9200 inter 2000 rise 2 fall 5
    server controller02 192.168.10.102:3306 check port 9200 inter 2000 rise 2 fall 5
    server controller03 192.168.10.103:3306 check port 9200 inter 2000 rise 2 fall 5

# Copy the configuration file to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/
```
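The galera_cluster backends above are health-checked on TCP port 9200, which is normally served by the clustercheck script through xinetd (xinetd was installed together with MariaDB-server earlier, but this wiring is not shown in the original text). A minimal sketch, assuming /usr/bin/clustercheck is present (it ships with the Galera/Percona clustercheck packages) and using its default clustercheckuser credentials, would be:

```
# On every controller node: create the monitoring user that clustercheck queries by default
mysql -uroot -popenstack -e "GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';"

# Serve /usr/bin/clustercheck on TCP 9200 via xinetd
cat > /etc/xinetd.d/mysqlchk <<'EOF'
service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from      = 0.0.0.0/0
    per_source     = UNLIMITED
}
EOF

# Make the service name resolvable for xinetd and restart it
grep -q '^mysqlchk' /etc/services || echo 'mysqlchk        9200/tcp    # galera health check' >> /etc/services
systemctl enable xinetd
systemctl restart xinetd

# A synced node answers with an HTTP 200 response
curl -s -i http://192.168.10.101:9200 | head -1
```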
  480.  
```
# Script to restart the pacemaker/corosync cluster
vim restart-pcs-cluster.sh
#!/bin/sh
pcs cluster stop --all
sleep 10
#ps aux|grep "pcs cluster stop --all"|grep -v grep|awk '{print $2 }'|xargs kill
for i in 01 02 03; do ssh controller$i pcs cluster kill; done
pcs cluster stop --all
pcs cluster start --all
sleep 5
watch -n 0.5 pcs resource
echo "pcs resource"
pcs resource
pcs resource|grep Stop
pcs resource|grep FAILED

# Run the script
bash restart-pcs-cluster.sh
```
  501.  
##### 9. Install and configure the RabbitMQ cluster
  503.  
```
# All controller nodes
yum install -y rabbitmq-server

# Copy the Erlang cookie from controller01 to the other controller nodes
scp /var/lib/rabbitmq/.erlang.cookie root@controller02:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@controller03:/var/lib/rabbitmq/.erlang.cookie

# On all nodes except controller01, fix ownership and permissions
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie

# Start the service
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

# Check the cluster status on any controller node
rabbitmqctl cluster_status

# On all nodes except controller01, join the cluster
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app

# On any node, set the ha-mode policy
rabbitmqctl cluster_status;
rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

# On any node, create the openstack user
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
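A short check from any controller confirms that all three brokers joined, the HA policy is in place, and the openstack user exists:

```
rabbitmqctl cluster_status          # running_nodes should list rabbit@controller01/02/03
rabbitmqctl list_policies           # ha-all policy applied to '^(?!amq\.).*'
rabbitmqctl list_users              # openstack user present
```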
  536.  
##### 10. Install and configure memcached
  538.  
```
yum install -y memcached

# Configuration on controller01
cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.101,::1"

# Configuration on controller02
cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.102,::1"

# Configuration on controller03
cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.103,::1"

# Start the service on all nodes
systemctl enable memcached.service
systemctl start memcached.service
```
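memcached is deliberately bound to each node's own management address (the keystone_authtoken sections later list all three servers), so each instance can be probed directly. A quick check, assuming memcached-tool from the memcached package is available:

```
# Each controller should answer on its own management IP
for ip in 192.168.10.101 192.168.10.102 192.168.10.103; do
    echo "== $ip =="
    memcached-tool $ip:11211 stats | grep -E 'uptime|curr_connections'
done
```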
  570.  
#### 3. Install and configure the OpenStack services
  572.  
```
# Install the OpenStack base packages on all controller and compute nodes
yum upgrade -y
yum install -y python-openstackclient openstack-selinux openstack-utils
```
  578.  
##### 1. Install OpenStack Identity (Keystone)
  580.  
  581. ```
  582.  
# Create the keystone database on any controller node
  584. mysql -uroot -popenstack -e "CREATE DATABASE keystone;
  585. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '"keystone"';
  586. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '"keystone"';
  587. FLUSH PRIVILEGES;"
  588.  
# Install the keystone packages on all controller nodes
  590. yum install -y openstack-keystone httpd mod_wsgi
  591.  
# Generate a temporary admin token on any node
  593. openssl rand -hex 10
  594. 8464d030a1f7ac3f7207
  595.  
# Edit the keystone configuration file
  597. openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 8464d030a1f7ac3f7207
  598. openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@controller/keystone
  599. #openstack-config --set /etc/keystone/keystone.conf token provider fernet
  600.  
  601. openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
  602. openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true
  603. openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_interval 1
  604. openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_backoff 2
  605. openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_max_retries 0
  606. openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_durable_queues true
  607.  
# Copy the configuration file to the other controller nodes
  609. scp /etc/keystone/keystone.conf controller02:/etc/keystone/keystone.conf
  610. scp /etc/keystone/keystone.conf controller03:/etc/keystone/keystone.conf
  611.  
# Set ServerName in httpd.conf; run the line matching the local node on each controller:
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller01"'#g' /etc/httpd/conf/httpd.conf
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller02"'#g' /etc/httpd/conf/httpd.conf
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller03"'#g' /etc/httpd/conf/httpd.conf
  615.  
# Configuration on controller01
  617. vim /etc/httpd/conf.d/wsgi-keystone.conf
  618. Listen 192.168.10.101:5000
  619. Listen 192.168.10.101:35357
  620. <VirtualHost 192.168.10.101:5000>
  621. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  622. WSGIProcessGroup keystone-public
  623. WSGIScriptAlias / /usr/bin/keystone-wsgi-public
  624. WSGIApplicationGroup %{GLOBAL}
  625. WSGIPassAuthorization On
  626. ErrorLogFormat "%{cu}t %M"
  627. ErrorLog /var/log/httpd/keystone-error.log
  628. CustomLog /var/log/httpd/keystone-access.log combined
  629.  
  630. <Directory /usr/bin>
  631. Require all granted
  632. </Directory>
  633. </VirtualHost>
  634.  
  635. <VirtualHost 192.168.10.101:35357>
  636. WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  637. WSGIProcessGroup keystone-admin
  638. WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
  639. WSGIApplicationGroup %{GLOBAL}
  640. WSGIPassAuthorization On
  641. ErrorLogFormat "%{cu}t %M"
  642. ErrorLog /var/log/httpd/keystone-error.log
  643. CustomLog /var/log/httpd/keystone-access.log combined
  644.  
  645. <Directory /usr/bin>
  646. Require all granted
  647. </Directory>
  648. </VirtualHost>
  649.  
# Configuration on controller02
  651. vim /etc/httpd/conf.d/wsgi-keystone.conf
  652. Listen 192.168.10.102:5000
  653. Listen 192.168.10.102:35357
  654. <VirtualHost 192.168.10.102:5000>
  655. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  656. WSGIProcessGroup keystone-public
  657. WSGIScriptAlias / /usr/bin/keystone-wsgi-public
  658. WSGIApplicationGroup %{GLOBAL}
  659. WSGIPassAuthorization On
  660. ErrorLogFormat "%{cu}t %M"
  661. ErrorLog /var/log/httpd/keystone-error.log
  662. CustomLog /var/log/httpd/keystone-access.log combined
  663.  
  664. <Directory /usr/bin>
  665. Require all granted
  666. </Directory>
  667. </VirtualHost>
  668.  
  669. <VirtualHost 192.168.10.102:35357>
  670. WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  671. WSGIProcessGroup keystone-admin
  672. WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
  673. WSGIApplicationGroup %{GLOBAL}
  674. WSGIPassAuthorization On
  675. ErrorLogFormat "%{cu}t %M"
  676. ErrorLog /var/log/httpd/keystone-error.log
  677. CustomLog /var/log/httpd/keystone-access.log combined
  678.  
  679. <Directory /usr/bin>
  680. Require all granted
  681. </Directory>
  682. </VirtualHost>
  683.  
# Configuration on controller03
  685. vim /etc/httpd/conf.d/wsgi-keystone.conf
  686. Listen 192.168.10.103:5000
  687. Listen 192.168.10.103:35357
  688. <VirtualHost 192.168.10.103:5000>
  689. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  690. WSGIProcessGroup keystone-public
  691. WSGIScriptAlias / /usr/bin/keystone-wsgi-public
  692. WSGIApplicationGroup %{GLOBAL}
  693. WSGIPassAuthorization On
  694. ErrorLogFormat "%{cu}t %M"
  695. ErrorLog /var/log/httpd/keystone-error.log
  696. CustomLog /var/log/httpd/keystone-access.log combined
  697.  
  698. <Directory /usr/bin>
  699. Require all granted
  700. </Directory>
  701. </VirtualHost>
  702.  
  703. <VirtualHost 192.168.10.103:35357>
  704. WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  705. WSGIProcessGroup keystone-admin
  706. WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
  707. WSGIApplicationGroup %{GLOBAL}
  708. WSGIPassAuthorization On
  709. ErrorLogFormat "%{cu}t %M"
  710. ErrorLog /var/log/httpd/keystone-error.log
  711. CustomLog /var/log/httpd/keystone-access.log combined
  712.  
  713. <Directory /usr/bin>
  714. Require all granted
  715. </Directory>
  716. </VirtualHost>
  717.  
# Add the haproxy configuration
  719. vim /etc/haproxy/haproxy.cfg
  720. listen keystone_admin_cluster
  721. bind 192.168.10.100:35357
  722. balance source
  723. option tcpka
  724. option httpchk
  725. option tcplog
  726. server controller01 192.168.10.101:35357 check inter 2000 rise 2 fall 5
  727. server controller02 192.168.10.102:35357 check inter 2000 rise 2 fall 5
  728. server controller03 192.168.10.103:35357 check inter 2000 rise 2 fall 5
  729. listen keystone_public_internal_cluster
  730. bind 192.168.10.100:5000
  731. balance source
  732. option tcpka
  733. option httpchk
  734. option tcplog
  735. server controller01 192.168.10.101:5000 check inter 2000 rise 2 fall 5
  736. server controller02 192.168.10.102:5000 check inter 2000 rise 2 fall 5
  737. server controller03 192.168.10.103:5000 check inter 2000 rise 2 fall 5
  738.  
# Copy the haproxy configuration to the other controller nodes
  740. scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
  741. scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
  742.  
# [any node] Populate the database
  744. su -s /bin/sh -c "keystone-manage db_sync" keystone
  745.  
# [any node/controller01] Initialize the Fernet keys and share them with the other nodes
# (only needed if the fernet token provider is enabled above)
#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# On the other controller nodes
#mkdir -p /etc/keystone/fernet-keys/

# On controller01
#scp /etc/keystone/fernet-keys/* root@controller02:/etc/keystone/fernet-keys/
#scp /etc/keystone/fernet-keys/* root@controller03:/etc/keystone/fernet-keys/

# On the other controller nodes
chown keystone:keystone /etc/keystone/fernet-keys/*
  758.  
# [any node] Add the pacemaker resource; the OpenStack resources are independent of the haproxy resource and can run active/active
# interleave=true lets clone instances start/stop in an interleaved fashion: it changes the order constraints between clones so that
# each instance can start or stop as soon as its own dependencies are met, without waiting for the instances on other nodes
pcs resource create openstack-keystone systemd:httpd --clone interleave=true
bash restart-pcs-cluster.sh

# Export the temporary admin token on any controller node
  766. export OS_TOKEN=8464d030a1f7ac3f7207
  767. export OS_URL=http://controller:35357/v3
  768. export OS_IDENTITY_API_VERSION=3
  769.  
# [any node] Create the service entity and API endpoints
  771. openstack service create --name keystone --description "OpenStack Identity" identity
  772.  
  773. openstack endpoint create --region RegionOne identity public http://controller:5000/v3
  774. openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
  775. openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
  776.  
# [any node] Create the admin project and user
  778. openstack domain create --description "Default Domain" default
  779. openstack project create --domain default --description "Admin Project" admin
  780. openstack user create --domain default --password admin admin
  781. openstack role create admin
  782. openstack role add --project admin --user admin admin
  783.  
### [any node] Create the service project
  785. openstack project create --domain default --description "Service Project" service
  786.  
# Create the demo project and user on any node
  788. openstack project create --domain default --description "Demo Project" demo
  789. openstack user create --domain default --password demo demo
  790. openstack role create user
  791. openstack role add --project demo --user demo user
  792.  
# Generate the keystonerc_admin script
  794. echo "export OS_PROJECT_DOMAIN_NAME=default
  795. export OS_USER_DOMAIN_NAME=default
  796. export OS_PROJECT_NAME=admin
  797. export OS_USERNAME=admin
  798. export OS_PASSWORD=admin
  799. export OS_AUTH_URL=http://controller:35357/v3
  800. export OS_IDENTITY_API_VERSION=3
  801. export OS_IMAGE_API_VERSION=2
  802. export PS1='[\u@\h \W(keystone_admin)]\$ '
  803. ">/root/keystonerc_admin
  804. chmod +x /root/keystonerc_admin
  805.  
# Generate the keystonerc_demo script
  807. echo "export OS_PROJECT_DOMAIN_NAME=default
  808. export OS_USER_DOMAIN_NAME=default
  809. export OS_PROJECT_NAME=demo
  810. export OS_USERNAME=demo
  811. export OS_PASSWORD=demo
  812. export OS_AUTH_URL=http://controller:5000/v3
  813. export OS_IDENTITY_API_VERSION=3
  814. export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_demo)]\$ '
  816. ">/root/keystonerc_demo
  817. chmod +x /root/keystonerc_demo
  818.  
  819. source keystonerc_admin
  820. ### check
  821. openstack token issue
  822.  
  823. source keystonerc_demo
  824. ### check
  825. openstack token issue
  826. ```
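With the pacemaker resource started and haproxy forwarding ports 5000/35357 on the VIP, keystone can also be checked through the controller name itself; the calls below only rely on the keystonerc_admin file created above.

```
# The version documents should be reachable through the VIP on both ports
curl -s http://controller:5000/v3 | python -m json.tool | head -5
curl -s http://controller:35357/v3 | python -m json.tool | head -5

# The catalog should contain the three identity endpoints created above
source /root/keystonerc_admin
openstack endpoint list
openstack service list
```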
  827.  
##### 2. Install the OpenStack Image (Glance) cluster
  829.  
  830. ```
# [any node] Create the database
  832. mysql -uroot -popenstack -e "CREATE DATABASE glance;
  833. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '"glance"';
  834. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '"glance"';
  835. FLUSH PRIVILEGES;"
  836.  
# [any node] Create the user, service, and endpoints
  838. source keystonerc_admin
  839. openstack user create --domain default --password glance glance
  840. openstack role add --project service --user glance admin
  841. openstack service create --name glance --description "OpenStack Image" image
  842. openstack endpoint create --region RegionOne image public http://controller:9292
  843. openstack endpoint create --region RegionOne image internal http://controller:9292
  844. openstack endpoint create --region RegionOne image admin http://controller:9292
  845.  
# Install the glance packages on all controller nodes
  847. yum install -y openstack-glance
  848.  
# [all controller nodes] Configure /etc/glance/glance-api.conf
  850. openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@controller/glance
  851.  
  852. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
  853. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
  854. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
  855. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
  856. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
  857. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
  858. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
  859. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
  860. openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance
  861.  
  862. openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
  863.  
  864. openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
  865. openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
  866. openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
  867.  
  868. openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host controller
  869. openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host controller01
  870.  
# [all controller nodes] Configure /etc/glance/glance-registry.conf
  872. openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@controller/glance
  873.  
  874. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
  875. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
  876. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
  877. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
  878. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
  879. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
  880. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
  881. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
  882. openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance
  883.  
  884. openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
  885.  
  886. openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
  887. openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues true
  888. openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_interval 1
  889. openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_backoff 2
  890. openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_max_retries 0
  891. openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_durable_queues true
  892.  
  893. openstack-config --set /etc/glance/glance-registry.conf DEFAULT registry_host controller
  894. openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host controller01
  895.  
scp /etc/glance/glance-api.conf controller02:/etc/glance/glance-api.conf
scp /etc/glance/glance-api.conf controller03:/etc/glance/glance-api.conf
# On controller02 and controller03, change bind_host to the node's own address

scp /etc/glance/glance-registry.conf controller02:/etc/glance/glance-registry.conf
scp /etc/glance/glance-registry.conf controller03:/etc/glance/glance-registry.conf
# On controller02 and controller03, change bind_host to the node's own address
  903.  
  904. vim /etc/haproxy/haproxy.cfg
# Add the following configuration
  906. listen glance_api_cluster
  907. bind 192.168.10.100:9292
  908. balance source
  909. option tcpka
  910. option httpchk
  911. option tcplog
  912. server controller01 192.168.10.101:9292 check inter 2000 rise 2 fall 5
  913. server controller02 192.168.10.102:9292 check inter 2000 rise 2 fall 5
  914. server controller03 192.168.10.103:9292 check inter 2000 rise 2 fall 5
  915. listen glance_registry_cluster
  916. bind 192.168.10.100:9191
  917. balance source
  918. option tcpka
  919. option tcplog
  920. server controller01 192.168.10.101:9191 check inter 2000 rise 2 fall 5
  921. server controller02 192.168.10.102:9191 check inter 2000 rise 2 fall 5
  922. server controller03 192.168.10.103:9191 check inter 2000 rise 2 fall 5
  923.  
  924. scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
  925. scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
  926.  
# [any node] Populate the database
su -s /bin/sh -c "glance-manage db_sync" glance

# [any node] Add the pacemaker resources
pcs resource create openstack-glance-registry systemd:openstack-glance-registry --clone interleave=true
pcs resource create openstack-glance-api systemd:openstack-glance-api --clone interleave=true
# The two order constraints below start openstack-keystone-clone first, then openstack-glance-registry-clone, then openstack-glance-api-clone
pcs constraint order start openstack-keystone-clone then openstack-glance-registry-clone
pcs constraint order start openstack-glance-registry-clone then openstack-glance-api-clone
# The api follows the registry: if the registry cannot start, the api will not start either
pcs constraint colocation add openstack-glance-api-clone with openstack-glance-registry-clone

# Restart pacemaker from any node
bash restart-pcs-cluster.sh

# Upload a test image (download cirros-0.3.4-x86_64-disk.img to the current directory first)
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list
  945. ```
  946.  
##### 3. Install the OpenStack Compute (Nova) cluster (controller nodes)
  948.  
  949. ```
# Install the packages on all controller nodes
  951. yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
  952.  
# [any node] Create the databases
  954. mysql -uroot -popenstack -e "CREATE DATABASE nova;
  955. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '"nova"';
  956. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '"nova"';
  957. CREATE DATABASE nova_api;
  958. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '"nova"';
  959. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '"nova"';
  960. FLUSH PRIVILEGES;"
  961.  
# [any node] Create the user, service, and endpoints
  963. source keystonerc_admin
  964. openstack user create --domain default --password nova nova
  965. openstack role add --project service --user nova admin
  966. openstack service create --name nova --description "OpenStack Compute" compute
  967. openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
  968. openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
  969. openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
  970.  
# [all controller nodes] Configure Nova in /etc/nova/nova.conf
  972. openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
  973. # openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211
  974.  
  975. openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:nova@controller/nova_api
  976. openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:nova@controller/nova
  977.  
  978. openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
  979. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
  980. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true
  981. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
  982. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
  983. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
  984. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true
  985. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
  986. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password openstack
  987.  
  988. openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
  989. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
  990. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
  991. openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
  992. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
  993. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
  994. openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
  995. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
  996. openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
  997. openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
  998.  
  999. openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.101
  1000. openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
  1001. openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
  1002. openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.101
  1003. openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.101
  1004. openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.101
  1005. openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
  1006. openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
  1007. openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.101
  1008. openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.101
  1009.  
  1010. scp /etc/nova/nova.conf controller02:/etc/nova/nova.conf
  1011. scp /etc/nova/nova.conf controller03:/etc/nova/nova.conf
# On the other nodes, adjust my_ip, vncserver_listen, vncserver_proxyclient_address, osapi_compute_listen, metadata_listen, and novncproxy_host
  1013. openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.102
  1014. openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.102
  1015. openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.102
  1016. openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.102
  1017. openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.102
  1018. openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.102
  1019.  
  1020. ################################
  1021. openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.103
  1022. openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.103
  1023. openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.103
  1024. openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.103
  1025. openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.103
  1026. openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.103
  1027. ##################################
# Configure haproxy
  1029. vim /etc/haproxy/haproxy.cfg
  1030. listen nova_compute_api_cluster
  1031. bind 192.168.10.100:8774
  1032. balance source
  1033. option tcpka
  1034. option httpchk
  1035. option tcplog
  1036.  
  1037. server controller01 192.168.10.101:8774 check inter 2000 rise 2 fall 5
  1038. server controller02 192.168.10.102:8774 check inter 2000 rise 2 fall 5
  1039. server controller03 192.168.10.103:8774 check inter 2000 rise 2 fall 5
  1040. listen nova_metadata_api_cluster
  1041. bind 192.168.10.100:8775
  1042. balance source
  1043. option tcpka
  1044. option tcplog
  1045. server controller01 192.168.10.101:8775 check inter 2000 rise 2 fall 5
  1046. server controller02 192.168.10.102:8775 check inter 2000 rise 2 fall 5
  1047. server controller03 192.168.10.103:8775 check inter 2000 rise 2 fall 5
  1048. listen nova_vncproxy_cluster
  1049. bind 192.168.10.100:6080
  1050. balance source
  1051. option tcpka
  1052. option tcplog
  1053. server controller01 192.168.10.101:6080 check inter 2000 rise 2 fall 5
  1054. server controller02 192.168.10.102:6080 check inter 2000 rise 2 fall 5
  1055. server controller03 192.168.10.103:6080 check inter 2000 rise 2 fall 5
  1056.  
  1057. scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
  1058. scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
  1059.  
# [any node] Populate the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

# [any node] Add the pacemaker resources
pcs resource create openstack-nova-consoleauth systemd:openstack-nova-consoleauth --clone interleave=true
pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy --clone interleave=true
pcs resource create openstack-nova-api systemd:openstack-nova-api --clone interleave=true
pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler --clone interleave=true
pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor --clone interleave=true
# The order constraints below start openstack-keystone-clone first, then openstack-nova-consoleauth-clone,
# then openstack-nova-novncproxy-clone, openstack-nova-api-clone, openstack-nova-scheduler-clone,
# and finally openstack-nova-conductor-clone.
# The colocation constraints chain the resources: consoleauth constrains where novncproxy may run
# (if consoleauth stops, novncproxy stops), and so on down the chain.
  1075. pcs constraint order start openstack-keystone-clone then openstack-nova-consoleauth-clone
  1076.  
  1077. pcs constraint order start openstack-nova-consoleauth-clone then openstack-nova-novncproxy-clone
  1078. pcs constraint colocation add openstack-nova-novncproxy-clone with openstack-nova-consoleauth-clone
  1079.  
  1080. pcs constraint order start openstack-nova-novncproxy-clone then openstack-nova-api-clone
  1081. pcs constraint colocation add openstack-nova-api-clone with openstack-nova-novncproxy-clone
  1082.  
  1083. pcs constraint order start openstack-nova-api-clone then openstack-nova-scheduler-clone
  1084. pcs constraint colocation add openstack-nova-scheduler-clone with openstack-nova-api-clone
  1085.  
  1086. pcs constraint order start openstack-nova-scheduler-clone then openstack-nova-conductor-clone
  1087. pcs constraint colocation add openstack-nova-conductor-clone with openstack-nova-scheduler-clone
  1088.  
  1089. bash restart-pcs-cluster.sh
  1090.  
### [any node] Verify
  1092. source keystonerc_admin
  1093. openstack compute service list
  1094. ```
  1095.  
##### 4. Install and configure the Neutron cluster (controller nodes)
  1097.  
  1098. ```
# [any node] Create the database
  1100. mysql -uroot -popenstack -e "CREATE DATABASE neutron;
  1101. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '"neutron"';
  1102. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '"neutron"';
  1103. FLUSH PRIVILEGES;"
  1104.  
# [any node] Create the user, service, and endpoints
  1106. source /root/keystonerc_admin
  1107. openstack user create --domain default --password neutron neutron
  1108. openstack role add --project service --user neutron admin
  1109. openstack service create --name neutron --description "OpenStack Networking" network
  1110. openstack endpoint create --region RegionOne network public http://controller:9696
  1111. openstack endpoint create --region RegionOne network internal http://controller:9696
  1112. openstack endpoint create --region RegionOne network admin http://controller:9696
  1113.  
# All controller nodes
  1115. yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables
  1116.  
# [all controller nodes] Configure the neutron server, /etc/neutron/neutron.conf
  1118. openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.10.101
  1119. openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@controller/neutron
  1120. openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
  1121. openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
  1122. openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
  1123.  
  1124. openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
  1125. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
  1126. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
  1127. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
  1128. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
  1129. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
  1130. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
  1131. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
  1132. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password openstack
  1133.  
  1134. openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
  1135. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
  1136. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
  1137. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
  1138. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
  1139. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
  1140. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
  1141. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
  1142. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
  1143. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
  1144.  
  1145. openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
  1146. openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
  1147. openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
  1148. openstack-config --set /etc/neutron/neutron.conf nova auth_type password
  1149. openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
  1150. openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
  1151. openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
  1152. openstack-config --set /etc/neutron/neutron.conf nova project_name service
  1153. openstack-config --set /etc/neutron/neutron.conf nova username nova
  1154. openstack-config --set /etc/neutron/neutron.conf nova password nova
  1155.  
  1156. openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
  1157.  
# [all controller nodes] Configure the ML2 plugin, /etc/neutron/plugins/ml2/ml2_conf.ini
  1159. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan,gre
  1160. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
  1161. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
  1162. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
  1163.  
  1164. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external
  1165.  
  1166. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
  1167. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges external:1:4090
  1168. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
  1169. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
  1170. openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver iptables_hybrid
  1171.  
# [all controller nodes] Configure the Open vSwitch agent, /etc/neutron/plugins/ml2/openvswitch_agent.ini. Note: local_ip is the node's address on the second NIC (tunnel network).
  1173.  
  1174. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True
  1175. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True
  1176. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
  1177. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
  1178. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
  1179. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.0.0.1
  1180. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings external:br-ex
  1181.  
  1182. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
  1183. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True
  1184.  
# [all controller nodes] Configure the L3 agent, /etc/neutron/l3_agent.ini
  1186. openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
  1187. openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
  1188.  
# [all controller nodes] Configure the DHCP agent, /etc/neutron/dhcp_agent.ini
  1190. openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
  1191. openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
  1192. openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
  1193.  
# [all controller nodes] Configure the metadata agent, /etc/neutron/metadata_agent.ini
  1195. openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.10.100
  1196. openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret openstack
  1197.  
# [all controller nodes] Configure nova to use neutron, /etc/nova/nova.conf
  1199. openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
  1200. openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
  1201. openstack-config --set /etc/nova/nova.conf neutron auth_type password
  1202. openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
  1203. openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
  1204. openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
  1205. openstack-config --set /etc/nova/nova.conf neutron project_name service
  1206. openstack-config --set /etc/nova/nova.conf neutron username neutron
  1207. openstack-config --set /etc/nova/nova.conf neutron password neutron
  1208.  
  1209. openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
  1210. openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret openstack
  1211.  
# [all controller nodes] Configure L3 agent HA, /etc/neutron/neutron.conf
  1213. openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
  1214. openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True
  1215. openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
  1216. openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
  1217.  
# [all controller nodes] Configure DHCP agent HA, /etc/neutron/neutron.conf
  1219. openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3
  1220.  
# [all controller nodes] Start the Open vSwitch (OVS) service; the bridge and port are created below
  1222. systemctl enable openvswitch.service
  1223. systemctl start openvswitch.service
  1224.  
# [all controller nodes] Create the ML2 configuration file symlink
  1226. ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  1227.  
  1228. vim /etc/haproxy/haproxy.cfg
  1229. listen neutron_api_cluster
  1230. bind 192.168.10.100:9696
  1231. balance source
  1232. option tcpka
  1233. option httpchk
  1234. option tcplog
  1235. server controller01 192.168.10.101:9696 check inter 2000 rise 2 fall 5
  1236. server controller02 192.168.10.102:9696 check inter 2000 rise 2 fall 5
  1237. server controller03 192.168.10.103:9696 check inter 2000 rise 2 fall 5
  1238.  
  1239. scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
  1240. scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
  1241.  
# Back up the original interface configuration file
  1243. cp /etc/sysconfig/network-scripts/ifcfg-ens160 /etc/sysconfig/network-scripts/bak-ifcfg-ens160
  1244. echo "DEVICE=br-ex
  1245. DEVICETYPE=ovs
  1246. TYPE=OVSBridge
  1247. BOOTPROTO=static
  1248. IPADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep IPADDR|awk -F '=' '{print $2}')
  1249. NETMASK=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep NETMASK|awk -F '=' '{print $2}')
  1250. GATEWAY=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep GATEWAY|awk -F '=' '{print $2}')
  1251. DNS1=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep DNS1|awk -F '=' '{print $2}')
  1252. DNS2=218.2.2.2
  1253. ONBOOT=yes">/etc/sysconfig/network-scripts/ifcfg-br-ex
  1254.  
  1255. echo "TYPE=OVSPort
  1256. DEVICETYPE=ovs
  1257. OVS_BRIDGE=br-ex
  1258. NAME=ens160
  1259. DEVICE=ens160
  1260. ONBOOT=yes">/etc/sysconfig/network-scripts/ifcfg-ens160
  1261.  
  1262. ovs-vsctl add-br br-ex
  1263. ovs-vsctl add-port br-ex ens160
  1264.  
  1265. systemctl restart network.service
  1266.  
# Copy the configuration files to the other controller nodes and adjust the node-specific values (bind_host, local_ip)
  1268. scp /etc/neutron/neutron.conf controller02:/etc/neutron/neutron.conf
  1269. scp /etc/neutron/neutron.conf controller03:/etc/neutron/neutron.conf
  1270.  
  1271. scp /etc/neutron/plugins/ml2/ml2_conf.ini controller02:/etc/neutron/plugins/ml2/ml2_conf.ini
  1272. scp /etc/neutron/plugins/ml2/ml2_conf.ini controller03:/etc/neutron/plugins/ml2/ml2_conf.ini
  1273.  
  1274. scp /etc/neutron/plugins/ml2/openvswitch_agent.ini controller02:/etc/neutron/plugins/ml2/openvswitch_agent.ini
  1275. scp /etc/neutron/plugins/ml2/openvswitch_agent.ini controller03:/etc/neutron/plugins/ml2/openvswitch_agent.ini
  1276.  
  1277. scp /etc/neutron/l3_agent.ini controller02:/etc/neutron/l3_agent.ini
  1278. scp /etc/neutron/l3_agent.ini controller03:/etc/neutron/l3_agent.ini
  1279.  
  1280. scp /etc/neutron/dhcp_agent.ini controller02:/etc/neutron/dhcp_agent.ini
  1281. scp /etc/neutron/dhcp_agent.ini controller03:/etc/neutron/dhcp_agent.ini
  1282.  
  1283. scp /etc/neutron/metadata_agent.ini controller02:/etc/neutron/metadata_agent.ini
  1284. scp /etc/neutron/metadata_agent.ini controller03:/etc/neutron/metadata_agent.ini
  1285.  
# [Any one node] Populate the neutron database
  1287. su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  1288.  
# [Any one node] Add Pacemaker resources
  1290. pcs resource create neutron-server systemd:neutron-server op start timeout=90 --clone interleave=true
  1291. pcs constraint order start openstack-keystone-clone then neutron-server-clone
  1292.  
# Globally unique clone (globally-unique=true): every clone instance is distinct; an instance on one node differs from an instance on another node, and any two instances on the same node also differ.
# clone-max: how many clone instances may run in the whole cluster (defaults to the number of nodes); clone-node-max: how many instances may run on a single node (defaults to 1).
  1295. pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true
  1296. pcs constraint order start neutron-server-clone then neutron-scale-clone
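# (Hedged example, my addition) the clone placement can be inspected, and its meta attributes
# (clone-max, clone-node-max) adjusted later if the cluster grows:
pcs resource show neutron-scale-clone
# pcs resource meta neutron-scale-clone clone-max=3 clone-node-max=1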
  1297.  
  1298. pcs resource create neutron-ovs-cleanup ocf:neutron:OVSCleanup --clone interleave=true
  1299. pcs resource create neutron-netns-cleanup ocf:neutron:NetnsCleanup --clone interleave=true
  1300. pcs resource create neutron-openvswitch-agent systemd:neutron-openvswitch-agent --clone interleave=true
  1301. pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
  1302. pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
  1303. pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true
  1304.  
  1305. pcs constraint order start neutron-scale-clone then neutron-ovs-cleanup-clone
  1306. pcs constraint colocation add neutron-ovs-cleanup-clone with neutron-scale-clone
  1307. pcs constraint order start neutron-ovs-cleanup-clone then neutron-netns-cleanup-clone
  1308. pcs constraint colocation add neutron-netns-cleanup-clone with neutron-ovs-cleanup-clone
  1309. pcs constraint order start neutron-netns-cleanup-clone then neutron-openvswitch-agent-clone
  1310. pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-netns-cleanup-clone
  1311. pcs constraint order start neutron-openvswitch-agent-clone then neutron-dhcp-agent-clone
  1312. pcs constraint colocation add neutron-dhcp-agent-clone with neutron-openvswitch-agent-clone
  1313. pcs constraint order start neutron-dhcp-agent-clone then neutron-l3-agent-clone
  1314. pcs constraint colocation add neutron-l3-agent-clone with neutron-dhcp-agent-clone
  1315. pcs constraint order start neutron-l3-agent-clone then neutron-metadata-agent-clone
  1316. pcs constraint colocation add neutron-metadata-agent-clone with neutron-l3-agent-clone
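# (Optional check, my addition) review the resulting ordering/colocation chain and where the clones run
pcs constraint
pcs status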
  1317.  
  1318. bash restart-pcs-cluster.sh
  1319.  
# [Any one node] Verify
source keystonerc_admin
neutron ext-list
neutron agent-list
ovs-vsctl show
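# (Hedged addition) once a tenant network and router exist, the L3/DHCP HA settings above can be checked;
# "demo-net" and "demo-router" are hypothetical names, not objects created by this guide
# neutron dhcp-agent-list-hosting-net demo-net       # expect up to 3 DHCP agents
# neutron l3-agent-list-hosting-router demo-router   # expect 2-3 L3 agents, one active / rest standby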
  1326. ```
  1327.  
##### 5、Install and configure the Dashboard (Horizon) cluster
  1329.  
  1330. ```
# Install on all controller nodes
  1332. yum install -y openstack-dashboard
  1333.  
# [All controller nodes] Edit /etc/openstack-dashboard/local_settings
  1335. sed -i \
  1336. -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.101"'"#g' \
  1337. -e "s#ALLOWED_HOSTS.*#ALLOWED_HOSTS = ['*',]#g" \
  1338. -e "s#^CACHES#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'\nCACHES#g#" \
  1339. -e "s#locmem.LocMemCache'#memcached.MemcachedCache',\n 'LOCATION' : [ 'controller01:11211', 'controller02:11211', 'controller03:11211', ]#g" \
  1340. -e 's#^OPENSTACK_KEYSTONE_URL =.*#OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST#g' \
  1341. -e "s/^#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT.*/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True/g" \
  1342. -e 's/^#OPENSTACK_API_VERSIONS.*/OPENSTACK_API_VERSIONS = {\n "identity": 3,\n "image": 2,\n "volume": 2,\n}\n#OPENSTACK_API_VERSIONS = {/g' \
  1343. -e "s/^#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN.*/OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'/g" \
  1344. -e 's#^OPENSTACK_KEYSTONE_DEFAULT_ROLE.*#OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"#g' \
  1345. -e "s#^LOCAL_PATH.*#LOCAL_PATH = '/var/lib/openstack-dashboard'#g" \
  1346. -e "s#^SECRET_KEY.*#SECRET_KEY = '4050e76a15dfb7755fe3'#g" \
  1347. -e "s#'enable_ha_router'.*#'enable_ha_router': True,#g" \
  1348. -e 's#TIME_ZONE = .*#TIME_ZONE = "'"Asia/Shanghai"'"#g' \
  1349. /etc/openstack-dashboard/local_settings
  1350.  
  1351. scp /etc/openstack-dashboard/local_settings controller02:/etc/openstack-dashboard/local_settings
  1352. scp /etc/openstack-dashboard/local_settings controller03:/etc/openstack-dashboard/local_settings
  1353.  
# On controller02
sed -i -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.102"'"#g' /etc/openstack-dashboard/local_settings
# On controller03
sed -i -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.103"'"#g' /etc/openstack-dashboard/local_settings
  1358.  
# [All controller nodes]
  1360. echo "COMPRESS_OFFLINE = True" >> /etc/openstack-dashboard/local_settings
  1361. python /usr/share/openstack-dashboard/manage.py compress
  1362.  
# [All controller nodes] Make httpd listen only on this node's own IP
  1364. sed -i -e 's/^Listen.*/Listen '"$(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g'|head -n 1)"':80/g' /etc/httpd/conf/httpd.conf
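# (Optional check, my addition) confirm httpd now listens only on this node's br-ex address
grep -E '^Listen' /etc/httpd/conf/httpd.conf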
  1365.  
vim /etc/haproxy/haproxy.cfg
listen dashboard_cluster
bind 192.168.10.100:80
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:80 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:80 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:80 check inter 2000 rise 2 fall 5
  1376.  
  1377. scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
  1378. scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
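# (Hedged check, my addition) once haproxy has been reloaded, the dashboard should answer on the VIP
curl -I http://192.168.10.100/dashboard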
  1379. ```
  1380.  
##### 6、Install and configure the Cinder cluster
  1382.  
  1383. ```
# All controller nodes
  1385. yum install -y openstack-cinder
  1386.  
# [Any one node] Create the database
  1388. mysql -uroot -popenstack -e "CREATE DATABASE cinder;
  1389. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '"cinder"';
  1390. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '"cinder"';
  1391. FLUSH PRIVILEGES;"
  1392.  
# [Any one node] Load the admin credentials
. /root/keystonerc_admin

# [Any one node] Create the cinder user, role, and services
  1397. openstack user create --domain default --password cinder cinder
  1398. openstack role add --project service --user cinder admin
  1399. openstack service create --name cinder --description "OpenStack Block Storage" volume
  1400. openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
  1401.  
# Create the API endpoints for the cinder services
  1403. openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
  1404. openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
  1405. openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
  1406. openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
  1407. openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
  1408. openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
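# (Optional check, my addition) confirm the six volume endpoints were registered
openstack endpoint list | grep volume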
  1409.  
# [All controller nodes] Edit /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder@controller/cinder
openstack-config --set /etc/cinder/cinder.conf database max_retries -1
  1413.  
  1414. openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
  1415. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
  1416. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
  1418. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
  1419. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
  1420. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
  1421. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
  1422. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
  1423. openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder
  1424.  
  1425. openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_max_retries 0
  1431. openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_durable_queues true
  1432. openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
  1433. openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password openstack
  1434.  
  1435. openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen $(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g'|head -n 1)
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g'|head -n 1)
  1438. openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
  1439. openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
  1440.  
# [Any one node] Populate the database
  1442. su -s /bin/sh -c "cinder-manage db sync" cinder
  1443.  
# [All controller nodes] Point nova at the cinder region
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

# Restart nova-api
  1448. # pcs resource restart openstack-nova-api-clone
  1449.  
# Install and configure the storage nodes; in this setup the controller nodes double as storage nodes
# All controller (storage) nodes
  1452. yum install lvm2 -y
  1453. systemctl enable lvm2-lvmetad.service
  1454. systemctl start lvm2-lvmetad.service
  1455.  
  1456. pvcreate /dev/sdb
  1457. vgcreate cinder-volumes /dev/sdb
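# (Optional check, my addition) verify the PV/VG; /dev/sdb is the data disk assumed above
pvs /dev/sdb
vgs cinder-volumes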
  1458.  
  1459. yum install openstack-cinder targetcli python-keystone -y
  1460.  
# [All controller nodes] Configure the LVM backend
  1462. openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
  1463. openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
  1464. openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
  1465. openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
  1466. openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
  1467.  
# Add the cinder API to haproxy.cfg
vim /etc/haproxy/haproxy.cfg
listen cinder_api_cluster
bind 192.168.10.100:8776
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:8776 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:8776 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:8776 check inter 2000 rise 2 fall 5
  1479.  
  1480. scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
  1481. scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
  1482.  
# [Any one node] Add Pacemaker resources
  1484. pcs resource create openstack-cinder-api systemd:openstack-cinder-api --clone interleave=true
  1485. pcs resource create openstack-cinder-scheduler systemd:openstack-cinder-scheduler --clone interleave=true
  1486. pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume
  1487.  
  1488. pcs constraint order start openstack-keystone-clone then openstack-cinder-api-clone
  1489. pcs constraint order start openstack-cinder-api-clone then openstack-cinder-scheduler-clone
  1490. pcs constraint colocation add openstack-cinder-scheduler-clone with openstack-cinder-api-clone
  1491. pcs constraint order start openstack-cinder-scheduler-clone then openstack-cinder-volume
  1492. pcs constraint colocation add openstack-cinder-volume with openstack-cinder-scheduler-clone
  1493.  
# Restart the cluster
bash restart-pcs-cluster.sh
# [Any one node] Verify
  1497. . /root/keystonerc_admin
  1498. cinder service-list
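# (Hedged smoke test, my addition) create and remove a 1 GB volume; "test-vol" is a hypothetical name
cinder create 1 --name test-vol
cinder list
cinder delete test-vol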
  1499. ```
##### 7、Install and configure the Ceilometer and Aodh clusters
##### 7.1 Install and configure the Ceilometer cluster

I really can't be bothered to comment on this project, so I'm not going to write this part.

##### 7.2 Install and configure the Aodh cluster

I really can't be bothered to comment on this project, so I'm not going to write this part.
  1508.  
#### 四、Install and configure the compute nodes
  1510. ##### 4.1 OpenStack Compute service
  1511. ```
  1512.  
# All compute nodes
  1514. yum install -y openstack-nova-compute
  1515.  
# Edit /etc/nova/nova.conf
  1517. openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $(ip addr show dev ens160 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
  1518. openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
  1519. openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211
  1521.  
  1522. openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
  1528. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true
  1529. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
  1530. openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password openstack
  1531.  
  1532. openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
  1533. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
  1534. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
  1536. openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
  1537. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
  1538. openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
  1539. openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
  1540. openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
  1541. openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
  1542.  
  1543. openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
  1544.  
  1545. openstack-config --set /etc/nova/nova.conf vnc enabled True
  1546. openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
  1547. openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $(ip addr show dev ens160 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
  1548. openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.10.100:6080/vnc_auto.html
  1549.  
  1550. openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf libvirt virt_type $(count=$(egrep -c '(vmx|svm)' /proc/cpuinfo); if [ $count -eq 0 ];then echo "qemu"; else echo "kvm"; fi)
  1552.  
# Open the libvirt listening ports needed for VM live migration
  1554. sed -i -e "s#\#listen_tls *= *0#listen_tls = 0#g" /etc/libvirt/libvirtd.conf
  1555. sed -i -e "s#\#listen_tcp *= *1#listen_tcp = 1#g" /etc/libvirt/libvirtd.conf
  1556. sed -i -e "s#\#auth_tcp *= *\"sasl\"#auth_tcp = \"none\"#g" /etc/libvirt/libvirtd.conf
  1557. sed -i -e "s#\#LIBVIRTD_ARGS *= *\"--listen\"#LIBVIRTD_ARGS=\"--listen\"#g" /etc/sysconfig/libvirtd
  1558.  
# Start the services
  1560. systemctl enable libvirtd.service openstack-nova-compute.service
  1561. systemctl start libvirtd.service openstack-nova-compute.service
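# (Hedged check, my addition) from any controller the new compute hosts should appear shortly
. /root/keystonerc_admin
nova service-list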
  1562. ```
  1563.  
  1564. ##### 4.2 OpenStack Network service
  1565.  
  1566. ```
# Install the components
  1568. yum install -y openstack-neutron-openvswitch ebtables ipset
  1569. yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
  1570.  
# Edit /etc/neutron/neutron.conf
  1572.  
  1573. openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
  1579. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
  1580. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
  1581. openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password openstack
  1582.  
  1583. openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
  1584. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
  1585. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
  1587. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
  1588. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
  1589. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
  1590. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
  1591. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
  1592. openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
  1593.  
  1594. openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
  1595.  
### Configure the Open vSwitch agent in /etc/neutron/plugins/ml2/openvswitch_agent.ini; note that local_ip uses the second NIC (ens192)
  1597. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True
  1598. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True
  1599. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
  1600.  
  1601. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip $(ip addr show dev ens192 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
  1602.  
  1603. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
  1604. openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True
  1605.  
### Configure nova/neutron integration in /etc/nova/nova.conf
  1607. openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
  1608. openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
  1609. openstack-config --set /etc/nova/nova.conf neutron auth_type password
  1610. openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
  1611. openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
  1612. openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
  1613. openstack-config --set /etc/nova/nova.conf neutron project_name service
  1614. openstack-config --set /etc/nova/nova.conf neutron username neutron
  1615. openstack-config --set /etc/nova/nova.conf neutron password neutron
  1616.  
  1617. ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  1618.  
  1619. systemctl restart openstack-nova-compute.service
  1620. systemctl start openvswitch.service
  1621. systemctl restart neutron-openvswitch-agent.service
  1622.  
  1623. systemctl enable openvswitch.service
  1624. systemctl enable neutron-openvswitch-agent.service
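# (Hedged check, my addition) from any controller the compute nodes' OVS agents should now be listed
. /root/keystonerc_admin
neutron agent-list | grep compute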
  1625. ```
  1626.  
#### 五、Patches and fixes
Controller nodes:
  1629.  
  1630. ```
  1631. GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller01' IDENTIFIED BY "openstack";
  1632. GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller02' IDENTIFIED BY "openstack";
  1633. GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller03' IDENTIFIED BY "openstack";
  1634. GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.101' IDENTIFIED BY "openstack";
  1635. GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.102' IDENTIFIED BY "openstack";
  1636. GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.103' IDENTIFIED BY "openstack";
  1637. ```
  1638.  
RabbitMQ cluster related:
  1640.  
  1641. ```
  1642. /sbin/service rabbitmq-server stop
  1643. /sbin/service rabbitmq-server start
  1644. ```
  1645.  
  1646. ```
# Set the default resource operation timeout
  1648. pcs resource op defaults timeout=90s
  1649.  
# Clear failed resource actions
  1651. pcs resource cleanup openstack-keystone-clone
  1652. ```
  1653.  
  1654. ##### mariadb集群排错
  1655.  
  1656. ```
Symptom: a node cannot start; tailf /var/log/messages shows the following error:
[ERROR] WSREP: gcs/src/gcs_group.cpp:group_post_state_exchange():
Fix: rm -f /var/lib/mysql/grastate.dat
Then restart the service.
  1661. ```
  1662.  
#### 六、Adding DVR support
##### 6.1 Controller node configuration
  1665.  
  1666. ```
  1667. vim /etc/neutron/neutron.conf
  1668. [DEFAULT]
  1669. router_distributed = true
  1670.  
  1671. vim /etc/neutron/plugins/ml2/ml2_conf.ini
  1672. mechanism_drivers = openvswitch,linuxbridge,l2population
  1673.  
  1674. vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
  1675. [DEFAULT]
  1676. enable_distributed_routing = true
  1677. [agent]
  1678. l2_population = True
  1679.  
  1680. vim /etc/neutron/l3_agent.ini
  1681. [DEFAULT]
  1682. agent_mode = dvr_snat
  1683.  
  1684. vim /etc/openstack-dashboard/local_settings
  1685. 'enable_distributed_router': True,
  1686.  
Restart the neutron services on the controller nodes, then restart httpd
  1688. ```
  1689.  
##### 6.2 Compute node configuration
  1691.  
  1692. ```
  1693. vim /etc/neutron/plugins/ml2/ml2_conf.ini
  1694. [ml2]
  1695. mechanism_drivers = openvswitch,l2population
  1696.  
  1697. vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
  1698. [DEFAULT]
  1699. enable_distributed_routing = true
  1700. [agent]
  1701. l2_population = True
  1702.  
  1703. vim /etc/neutron/l3_agent.ini
  1704. [DEFAULT]
  1705. interface_driver = openvswitch
  1706. external_network_bridge =
  1707. agent_mode = dvr
  1708.  
Restart the neutron services
  1710.  
  1711. ovs-vsctl add-br br-ex
  1712. ovs-vsctl add-port br-ex ens160
  1713. openstack-service restart neutron
  1714. ```
  1715.  
On the RabbitMQ connection-count (file descriptor) limit:
  1717.  
  1718. ```
[root@controller01 ~]# cat /etc/security/limits.d/20-nproc.conf
  1720. # Default limit for number of user's processes to prevent
  1721. # accidental fork bombs.
  1722. # See rhbz # for reasoning.
  1723. * soft nproc
  1724. root soft nproc unlimited
  1725. * soft nofile
  1726. * hard nofile
  1727.  
  1728. [root@controller01 ~]#ulimit -n
  1729.  
  1730. [root@controller01 ~]#cat /usr/lib/systemd/system/rabbitmq-server.service
  1731. [Service]
LimitNOFILE=    # add this parameter to the service unit file
  1733. [root@controller01 ~]#systemctl daemon-reload
  1734. [root@controller01 ~]#systemctl restart rabbitmq-server.service
  1735.  
[root@controller01 ~]#rabbitmqctl status
  1737. {file_descriptors,[{total_limit,},
  1738. {total_used,},
  1739. {sockets_limit,},
  1740. {sockets_used,}]}
  1741. ```
  1742.  
#### On highly available routers
HA or DVR (distributed) routers can only be created from the administrator pages.

#### Sharing the glance image store
Share the /var/lib/glance/images image directory of a controller node over NFS so all controllers use one store.
  1748.  
yum install nfs-utils rpcbind -y
  1750. mkdir /opt/glance/images/ -p
  1751. vim /etc/exports
  1752. /opt/glance/images/ 10.128.246.0/(rw,no_root_squash,no_all_squash,sync)
  1753.  
  1754. exportfs -r
  1755. systemctl enable rpcbind.service
  1756. systemctl start rpcbind.service
  1757. systemctl enable nfs-server.service
  1758. systemctl start nfs-server.service
  1759.  
# Check the export from the nova (compute) nodes
  1761. showmount -e 10.128.247.153
  1762.  
# Mount on the three controller nodes
mount -t nfs 10.128.247.153:/opt/glance/images/ /var/lib/glance/images/
  1765.  
  1766. chown -R glance.glance /opt/glance/images/
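A minimal sketch (my addition, assuming the export and the 10.128.247.153 server address used above): verify the mount on each controller and, optionally, persist it across reboots.

```
df -h /var/lib/glance/images/
echo "10.128.247.153:/opt/glance/images/ /var/lib/glance/images/ nfs defaults,_netdev 0 0" >> /etc/fstab
```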
  1767.  
  1768. ##########
  1769. 普通用户创建HA路由器
  1770. ```
  1771. neutron router-create router_demo --ha True
  1772. ```
