First, installing VirtualBox needs no walkthrough: download it from the official site (https://www.virtualbox.org/wiki/Downloads), or grab an older build from (https://www.virtualbox.org/wiki/Download_Old_Builds).

Next, set up VirtualBox networking.

Pay attention to the IP address field here: delete its contents entirely, switch to an English input method, and type the address again, otherwise full-width characters from a Chinese input method can make the value invalid.

Next, configure the Host-Only network.

Confirm that DHCP is not enabled:

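If you prefer the command line, the same host-only network can be created and its DHCP server disabled with VBoxManage; a minimal sketch, assuming the new interface comes up as vboxnet0 (check with VBoxManage list hostonlyifs):

    # create a host-only interface (typically vboxnet0)
    VBoxManage hostonlyif create
    # give it the usual 192.168.56.1/24 host-side address
    VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
    # disable the DHCP server attached to it
    VBoxManage dhcpserver modify --ifname vboxnet0 --disable
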
Now it's time to install Ubuntu.

Click New to create a VM, choose Linux as the type, and Ubuntu (64-bit) as the version.

The installation itself is not shown step by step, but the network adapters must be configured as follows.

Adapter 2 is configured as follows.

Next, attach storage: select the previously downloaded ubuntu-14.04.5-server-amd64.iso image (download: http://mirrors.aliyun.com/ubuntu-releases/14.04/ubuntu-14.04.5-server-amd64.iso).

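It doesn't hurt to verify the ISO before using it; a sketch, assuming the mirror publishes an MD5SUMS file in the same directory as the image:

    # compare the output against the MD5SUMS file from the mirror
    md5sum ubuntu-14.04.5-server-amd64.iso
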
After clicking "OK", start the VM to begin the installation.

Language: English (Enter)

Ubuntu: Install Ubuntu Server (Enter)

Keep pressing Enter to accept the defaults until:

Since NAT is used for outbound access, choose eth0 here. After pressing Enter, choose 'Cancel'; a warning appears, ignore it and choose "Continue". When prompted to configure the network, choose manual configuration and press Enter:

IP address:10.0.3.10

Netmask: 255.255.255.0

Gateway: 10.0.3.1

Name server addresses: 114.114.114.114

Hostname: controller

Domain name: leave empty and just press Enter.

Full name for the new user: openstack

Username for your account: openstack

Choose a password for the new user: 123456

Re-enter password to verify: 123456

Use weak password? Choose "Yes" and press Enter.

Encrypt your home directory? Choose "No" and press Enter.

Next, confirm the time zone is Shanghai. If it is, choose "Yes" to continue; if not, choose "No" and pick Shanghai from the list.

In the Partition disks step, choose "Guided - use entire disk", press Enter through the prompts, and when the confirmation shown below appears, choose "Yes" and press Enter.

Configure the package manager: leave the HTTP proxy empty, select Continue, and press Enter.

For the two "Configuring apt" steps, just press Enter to dismiss them.

Configuring tasksel: choose "No automatic updates", press Enter, then select OpenSSH server for installation.

The installation finishes and the system reboots automatically. Once it is back up, shut it down and clone it:

Choose "Full clone".

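The same clone can be made from the command line; a sketch, where the base VM name "ubuntu-base" is an assumption (substitute your own VM names):

    # creates a full clone and registers it; --mode all also carries over snapshots
    VBoxManage clonevm "ubuntu-base" --name "controller" --register --mode all
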
Next, configure the system environment. Select the newly cloned VM, click Start, and refer to this URL (https://github.com/JiYou/openstack-m/blob/master/os/interfaces) for the network configuration file. Then review and edit the interfaces file:

    openstack@controller:~$ cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth0
    iface eth0 inet static
    address 10.0.3.10
    netmask 255.255.255.0
    network 10.0.3.0
    broadcast 10.0.3.255
    gateway 10.0.3.1
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 114.114.114.114
    auto eth1
    iface eth1 inet static
    address 192.168.56.10
    netmask 255.255.255.0
    gateway 192.168.56.1
    dns-nameservers 114.114.114.114

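A reboot (next step) is the simplest way to apply this, but the interfaces can also be bounced in place; a sketch (run it from the VM console rather than over SSH, since the links drop briefly):

    sudo ifdown eth0 && sudo ifup eth0
    sudo ifdown eth1 && sudo ifup eth1
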
Reboot the system so the changes take effect, then test the connection with Xshell, PuTTY, or another remote tool; I use Git Bash here:

    xueji@xueji MINGW64 ~
    $ ssh openstack@192.168.56.10
    The authenticity of host '192.168.56.10 (192.168.56.10)' can't be established.
    ECDSA key fingerprint is SHA256:DvbqAHwl6bcmX3FcvaJZ1REpRR8Oup89ST+a8WFBY7Y.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '192.168.56.10' (ECDSA) to the list of known hosts.
    openstack@192.168.56.10's password:
    Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.--generic x86_64)

    * Documentation: https://help.ubuntu.com/

    System information as of Tue Jan :: CST

    System load: 0.11 Processes:
    Usage of /: 0.6% of .78GB Users logged in:
    Memory usage: % IP address for eth0: 10.0.3.10
    Swap usage: % IP address for eth1: 192.168.56.10

    Graph this data and manage this system at:
    https://landscape.canonical.com/

    packages can be updated.
    updates are security updates.

    New release '16.04.5 LTS' available.
    Run 'do-release-upgrade' to upgrade to it.

    Last login: Tue Jan ::
    openstack@controller:~$ ifconfig

Login successful.

Next, prepare the OpenStack packages.

    openstack@controller:~$ sudo -s
    [sudo] password for openstack:
    root@controller:~# apt-get update
    root@controller:~# apt-get install -y software-properties-common
    root@controller:~# add-apt-repository cloud-archive:mitaka
    Ubuntu Cloud Archive for OpenStack Mitaka
    More info: https://wiki.ubuntu.com/ServerTeam/CloudArchive
    Press [ENTER] to continue or ctrl-c to cancel adding it
    # press Enter
    Reading package lists...
    Building dependency tree...
    Reading state information...
    The following NEW packages will be installed:
    ubuntu-cloud-keyring
    upgraded, newly installed, to remove and not upgraded.
    Need to get , B of archives.
    After this operation, 34.8 kB of additional disk space will be used.
    Get: http://us.archive.ubuntu.com/ubuntu/ trusty/universe ubuntu-cloud-keyring all 2012.08.14 [5,086 B]
    Fetched , B in 0s (11.0 kB/s)
    Selecting previously unselected package ubuntu-cloud-keyring.
    (Reading database ... files and directories currently installed.)
    Preparing to unpack .../ubuntu-cloud-keyring_2012..14_all.deb ...
    Unpacking ubuntu-cloud-keyring (2012.08.) ...
    Setting up ubuntu-cloud-keyring (2012.08.) ...
    Importing ubuntu-cloud.archive.canonical.com keyring
    OK
    Processing ubuntu-cloud.archive.canonical.com removal keyring
    gpg: /etc/apt/trustdb.gpg: trustdb created
    OK

    root@controller:~# apt-get update && apt-get dist-upgrade
    root@controller:~# apt-get install -y python-openstackclient

Install NTP and MySQL.

    root@controller:~# hostname -I
    10.0.3.10 192.168.56.10
    root@controller:~# tail -n 2 /etc/hosts
    10.0.3.10 controller
    192.168.56.10 controller

    root@controller:~# vim /etc/chrony/chrony.conf
    # comment out the following four lines, then add "server controller iburst" below them
    #server 0.debian.pool.ntp.org offline minpoll 8
    #server 1.debian.pool.ntp.org offline minpoll 8
    #server 2.debian.pool.ntp.org offline minpoll 8
    #server 3.debian.pool.ntp.org offline minpoll 8
    server controller iburst

    root@controller:~# chronyc sources
    Number of sources =
    MS Name/IP address Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^? controller 10y +0ns[ +0ns] +/- 0ns

    # install MySQL
    root@controller:~# apt-get install -y mariadb-server python-pymysql
    # enter 123456 at the database root password prompt that pops up
    root@controller:~# cd /etc/mysql/
    root@controller:/etc/mysql# ls
    conf.d debian.cnf debian-start my.cnf
    root@controller:/etc/mysql# cp my.cnf{,.bak}
    root@controller:/etc/mysql# vim my.cnf
    [mysqld] # add the following lines under this section header
    default-storage-engine = innodb
    innodb_file_per_table
    max_connections = 4096 # value elided in the original; 4096 is the install-guide default
    collation-server = utf8_general_ci
    character-set-server = utf8

    bind-address = 0.0.0.0 # original value was 127.0.0.1
    # restart mysql
    root@controller:/etc/mysql# service mariadb restart
    mariadb: unrecognized service
    root@controller:/etc/mysql# service mysql restart
    * Stopping MariaDB database server mysqld [ OK ]
    * Starting MariaDB database server mysqld [ OK ]
    * Checking for corrupt, not cleanly closed and upgrade needing tables.
    # secure the installation
    root@controller:/etc/mysql# mysql_secure_installation
    /usr/bin/mysql_secure_installation: : /usr/bin/mysql_secure_installation: find_mysql_client: not found

    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
    SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

    In order to log into MariaDB to secure it, we'll need the current
    password for the root user. If you've just installed MariaDB, and
    you haven't set the root password yet, the password will be blank,
    so you should just press enter here.

    Enter current password for root (enter for none):
    OK, successfully used password, moving on...

    Setting the root password ensures that nobody can log into the MariaDB
    root user without the proper authorisation.

    You already have a root password set, so you can safely answer 'n'.

    Change the root password? [Y/n] n
    ... skipping.

    By default, a MariaDB installation has an anonymous user, allowing anyone
    to log into MariaDB without having to have a user account created for
    them. This is intended only for testing, and to make the installation
    go a bit smoother. You should remove them before moving into a
    production environment.

    Remove anonymous users? [Y/n] n
    ... skipping.

    Normally, root should only be allowed to connect from 'localhost'. This
    ensures that someone cannot guess at the root password from the network.

    Disallow root login remotely? [Y/n] n
    ... skipping.

    By default, MariaDB comes with a database named 'test' that anyone can
    access. This is also intended only for testing, and should be removed
    before moving into a production environment.

    Remove test database and access to it? [Y/n] n
    ... skipping.

    Reloading the privilege tables will ensure that all changes made so far
    will take effect immediately.

    Reload privilege tables now? [Y/n] y
    ... Success!

    Cleaning up...

    All done! If you've completed all of the above steps, your MariaDB
    installation should now be secure.

    Thanks for using MariaDB!
    # test the connections
    root@controller:/etc/mysql# mysql -uroot -p123456
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database |
    +--------------------+
    | information_schema |
    | mysql |
    | performance_schema |
    +--------------------+
    3 rows in set (0.00 sec)

    MariaDB [(none)]> \q
    Bye
    root@controller:/etc/mysql# mysql -uroot -p123456 -h10.0.3.10
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database |
    +--------------------+
    | information_schema |
    | mysql |
    | performance_schema |
    +--------------------+
    3 rows in set (0.00 sec)

    MariaDB [(none)]> \q
    Bye
    root@controller:/etc/mysql# mysql -uroot -p123456 -h192.168.56.10
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database |
    +--------------------+
    | information_schema |
    | mysql |
    | performance_schema |
    +--------------------+
    3 rows in set (0.00 sec)

    MariaDB [(none)]> \q
    Bye

    root@controller:/etc/mysql# mysql -uroot -p123456 -h127.0.0.1
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database |
    +--------------------+
    | information_schema |
    | mysql |
    | performance_schema |
    +--------------------+
    3 rows in set (0.00 sec)

    MariaDB [(none)]> \q
    Bye

Install MongoDB

    root@controller:~# apt-get install -y mongodb-server mongodb-clients python-pymongo
    root@controller:~# cp /etc/mongodb.conf{,.bak}
    root@controller:~# vim /etc/mongodb.conf

    bind_ip = 0.0.0.0 # original value was 127.0.0.1
    smallfiles = true # add this line
    root@controller:~# service mongodb stop
    mongodb stop/waiting
    root@controller:~# ls /var/lib/mongodb/journal/
    # if this directory contains any files whose names start with prealloc, delete them all
    root@controller:~# service mongodb start
    mongodb start/running, process

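To confirm mongod actually answers on the new bind address, a quick ping through the mongo shell works (a sketch; the mongodb-clients package installed above provides the mongo binary):

    mongo --host 10.0.3.10 --quiet --eval 'printjson(db.runCommand({ ping: 1 }))'
    # expected output: { "ok" : 1 }
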
Install RabbitMQ

    root@controller:~# apt-get install -y rabbitmq-server
    # add the openstack user (the password argument was elided in the original; RABBIT_PASS below is a placeholder)
    root@controller:~# rabbitmqctl add_user openstack RABBIT_PASS
    Creating user "openstack" ...
    # grant the "openstack" user configure/write/read permissions
    root@controller:~# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    Setting permissions for user "openstack" in vhost "/" ...

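The user and its permissions can be double-checked afterwards (a sketch):

    # list users known to this broker
    rabbitmqctl list_users
    # list permissions on the default vhost
    rabbitmqctl list_permissions -p /
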
Install Memcached

    root@controller:~# apt-get install -y memcached python-memcache
    root@controller:~# cp /etc/memcached.conf{,.bak}
    root@controller:~# vim /etc/memcached.conf

    -l 0.0.0.0 # original value was 127.0.0.1
    # restart memcached
    root@controller:~# service memcached restart
    Restarting memcached: memcached.
    root@controller:~# service memcached status
    * memcached is running
    root@controller:~# ps aux | grep memcached
    memcache 0.0 0.0 ? Sl : : /usr/bin/memcached -m -p -u memcache -l 0.0.0.0
    root 0.0 0.0 pts/ S+ : : grep --color=auto memcached

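A quick way to confirm memcached answers on the new address is to ask it for stats over plain TCP; a sketch using nc:

    printf 'stats\r\nquit\r\n' | nc 10.0.3.10 11211 | head -n 5
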
Install Keystone

    root@controller:~# mysql -uroot -p123456
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> create database keystone;
    Query OK, 1 row affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> \q
    Bye
    root@controller:~# mysql -ukeystone -p123456 -h 127.0.0.1
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database |
    +--------------------+
    | information_schema |
    | keystone |
    +--------------------+
    2 rows in set (0.00 sec)

    MariaDB [(none)]> \q
    Bye
    root@controller:~# mysql -ukeystone -p123456 -h 10.0.3.10
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database |
    +--------------------+
    | information_schema |
    | keystone |
    +--------------------+
    2 rows in set (0.00 sec)

    MariaDB [(none)]> \q
    Bye
    root@controller:~# mysql -ukeystone -p123456 -h 192.168.56.10
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database |
    +--------------------+
    | information_schema |
    | keystone |
    +--------------------+
    2 rows in set (0.00 sec)

    MariaDB [(none)]> \q
    Bye
    # all of the connections work
    # next, install the keystone packages
    root@controller:~# echo "manual" > /etc/init/keystone.override
    root@controller:~# apt-get install keystone apache2 libapache2-mod-wsgi

    # configure keystone.conf
    root@controller:~# cp /etc/keystone/keystone.conf{,.bak}
    root@controller:~# vim /etc/keystone/keystone.conf

    admin_token =  # set your admin token here (value elided in the original)
    connection = mysql+pymysql://keystone:123456@controller/keystone

    provider = fernet
    # sync the database
    root@controller:~# su -s /bin/sh -c "keystone-manage db_sync" keystone
    # initialize the fernet keys
    root@controller:~# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    -- ::34.134 INFO keystone.token.providers.fernet.utils [-] [fernet_tokens] key_repository does not appear to exist; attempting to create it
    -- ::34.135 INFO keystone.token.providers.fernet.utils [-] Created a new key: /etc/keystone/fernet-keys/
    -- ::34.135 INFO keystone.token.providers.fernet.utils [-] Starting key rotation with key files: ['/etc/keystone/fernet-keys/0']
    -- ::34.135 INFO keystone.token.providers.fernet.utils [-] Current primary key is:
    -- ::34.136 INFO keystone.token.providers.fernet.utils [-] Next primary key will be:
    -- ::34.136 INFO keystone.token.providers.fernet.utils [-] Promoted key to be the primary:
    -- ::34.137 INFO keystone.token.providers.fernet.utils [-] Created a new key: /etc/keystone/fernet-keys/
    root@controller:~# echo $?
    0

Configure Apache HTTP

    root@controller:~# cp /etc/apache2/apache2.conf{,.bak}
    root@controller:~# vim /etc/apache2/apache2.conf
    root@controller:~# grep 'ServerName' /etc/apache2/apache2.conf
    ServerName controller # add this line at the end of the file
    # next, create the wsgi-keystone.conf file
    # (ports and process/thread counts were elided in the original;
    #  5000/35357 and processes=5 threads=1 are the install-guide values)
    root@controller:~# vim /etc/apache2/sites-available/wsgi-keystone.conf
    Listen 5000
    Listen 35357

    <VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
    Require all granted
    </Directory>
    </VirtualHost>

    <VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
    Require all granted
    </Directory>
    </VirtualHost>

Enable the Identity service virtual hosts

    root@controller:~# ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled

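The symlink above is exactly what the Apache helper would create; if you prefer it, this is equivalent (a sketch):

    a2ensite wsgi-keystone
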
Restart Apache

    root@controller:~# service apache2 restart
    * Restarting web server apache2 [ OK ]
    root@controller:~# rm -rf /var/lib/keystone/keystone.db
    root@controller:~# lsof -i:5000
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    apache2 root 6u IPv6 0t0 TCP *:5000 (LISTEN)
    apache2 www-data 6u IPv6 0t0 TCP *:5000 (LISTEN)
    apache2 www-data 6u IPv6 0t0 TCP *:5000 (LISTEN)
    root@controller:~# lsof -i:35357
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    apache2 root 8u IPv6 0t0 TCP *:35357 (LISTEN)
    apache2 www-data 8u IPv6 0t0 TCP *:35357 (LISTEN)
    apache2 www-data 8u IPv6 0t0 TCP *:35357 (LISTEN)

Install python-openstackclient

    root@controller:~# apt-get install -y python-openstackclient

Configure the rootrc environment

    root@controller:~# vim rootrc
    root@controller:~# cat rootrc
    export OS_TOKEN=  # your admin_token value (elided in the original)
    export OS_URL=http://controller:35357/v3
    export OS_IDENTITY_API_VERSION=3
    export PS1="rootrc@\u@\h:\w\$"

    # load the rootrc environment
    root@controller:~# source rootrc

Register services with Keystone

Note that port 35357 is generally used for admin access, while port 5000 is the one exposed to regular users.

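Both ports can be probed with curl before registering anything; each should return the Identity v3 version document as JSON (a sketch):

    curl http://controller:5000/v3/
    curl http://controller:35357/v3/
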
Create the service entity and API endpoints

    adminrc@root@controller:~$source rootrc
    rootrc@root@controller:~$openstack service create --name keystone --description "OpenStack Identify" identity
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | OpenStack Identify |
    | enabled | True |
    | id | 7052e2715c874ae18dc520ec21026a34 |
    | name | keystone |
    | type | identity |
    +-------------+----------------------------------+
    rootrc@root@controller:~$openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | ac731860b374450484034b024e643004 |
    | interface | internal |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 7052e2715c874ae18dc520ec21026a34 |
    | service_name | keystone |
    | service_type | identity |
    | url | http://controller:5000/v3 |
    +--------------+----------------------------------+
    rootrc@root@controller:~$openstack endpoint create --region RegionOne identity public http://controller:5000/v3
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | d1f7296477a748ef82ad4970580d50b2 |
    | interface | public |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 7052e2715c874ae18dc520ec21026a34 |
    | service_name | keystone |
    | service_type | identity |
    | url | http://controller:5000/v3 |
    +--------------+----------------------------------+
    rootrc@root@controller:~$openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | df4eb1f2b08f474fa7b83ef979ebd0fb |
    | interface | admin |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 7052e2715c874ae18dc520ec21026a34 |
    | service_name | keystone |
    | service_type | identity |
    | url | http://controller:35357/v3 |
    +--------------+----------------------------------+

Next, create the domain, projects, users, and roles

    rootrc@root@controller:~$openstack domain create --description "Default Domain" default
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | Default Domain |
    | enabled | True |
    | id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | name | default |
    +-------------+----------------------------------+
    rootrc@root@controller:~$openstack project create --domain default --description "Admin Project" admin
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | Admin Project |
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | 29577090a0e8466ab49cc30a4305f5f8 |
    | is_domain | False |
    | name | admin |
    | parent_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    +-------------+----------------------------------+
    rootrc@root@controller:~$openstack user create --domain default --password admin admin
    +-----------+----------------------------------+
    | Field | Value |
    +-----------+----------------------------------+
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | 653177098fac40a28734093706299e66 |
    | name | admin |
    +-----------+----------------------------------+
    rootrc@root@controller:~$openstack role create admin
    +-----------+----------------------------------+
    | Field | Value |
    +-----------+----------------------------------+
    | domain_id | None |
    | id | 6abd897a6f134b8ea391377d1617a2f8 |
    | name | admin |
    +-----------+----------------------------------+
    rootrc@root@controller:~$openstack role add --project admin --user admin admin
    rootrc@root@controller:~$ # no output is the best output

Create the service and demo projects

    rootrc@root@controller:~$openstack project create --domain default --description "Service Project" service
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | Service Project |
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | 006a1ed36a0e4cbd8947d853b79d522c |
    | is_domain | False |
    | name | service |
    | parent_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    +-------------+----------------------------------+
    rootrc@root@controller:~$openstack project create --domain default --description "Demo Project" demo
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | Demo Project |
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | ffc560f6a2604c3896df922115c6fc2a |
    | is_domain | False |
    | name | demo |
    | parent_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    +-------------+----------------------------------+
    rootrc@root@controller:~$openstack user create --domain default --password demo demo
    +-----------+----------------------------------+
    | Field | Value |
    +-----------+----------------------------------+
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | c4de9fac882740838aa26e9119b30cb9 |
    | name | demo |
    +-----------+----------------------------------+
    rootrc@root@controller:~$openstack role create user
    +-----------+----------------------------------+
    | Field | Value |
    +-----------+----------------------------------+
    | domain_id | None |
    | id | e69817f50d6448fe888a64e51e025351 |
    | name | user |
    +-----------+----------------------------------+
    rootrc@root@controller:~$openstack role add --project demo --user demo user
    rootrc@root@controller:~$echo $?

Create adminrc

    rootrc@root@controller:~$vim adminrc
    rootrc@root@controller:~$cat adminrc
    unset OS_TOKEN
    unset OS_URL
    unset OS_IDENTITY_API_VERSION

    export OS_PROJECT_DOMAIN_NAME=default
    export OS_USER_DOMAIN_NAME=default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=admin
    export OS_AUTH_URL=http://controller:35357/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    export PS1="adminrc@\u@\h:\w\$"

Load the adminrc environment and try to fetch a Keystone token

    rootrc@root@controller:~$source adminrc
    adminrc@root@controller:~$openstack token issue
    +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Field | Value |
    +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | expires | --14T21::.000000Z |
    | id | gAAAAABcPPIQK270ipb9EgRW7feWYLunIVPaX9cTjhvgvTvMmpG8j8K_AkwPv5UL4WUFFzfDnO30A7WflnaOyufilAi7DCmbQ2YLlsGuAzgbCRYooV5pIJTkuqbhmRJDmFX068zliOri_rXL2CsTq9um3UtCPnOj7-7LxmXcFm5LwsP6OyzY4Ts |
    | project_id | 29577090a0e8466ab49cc30a4305f5f8 |
    | user_id | 653177098fac40a28734093706299e66 |
    +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    adminrc@root@controller:~$date
    Tue Jan :: CST

Create demorc

    adminrc@root@controller:~$vim demorc
    adminrc@root@controller:~$cat demorc
    unset OS_TOKEN
    unset OS_URL
    unset OS_IDENTITY_API_VERSION

    export OS_PROJECT_DOMAIN_NAME=default
    export OS_USER_DOMAIN_NAME=default
    export OS_PROJECT_NAME=demo
    export OS_USERNAME=demo
    export OS_PASSWORD=demo
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    export PS1="demorc@\u@\h:\w\$"

Fetch a token as the demo user

    adminrc@root@controller:~$source demorc
    demorc@root@controller:~$openstack token issue
    +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Field | Value |
    +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | expires | --14T21::.000000Z |
    | id | gAAAAABcPPPSLXi6E581bb8P0MpmHOLg-p0_vt9YLNWXn6feHLF6QONWq3Ny8JT4ceOvkKiv5TltLA4WRyn6XghcvZn-X0tuhOl07Eh6KXxGiGtEwgZyPFO-AFhykXims1FH0Tz4lp-fI_ExelOAcT50OFeKC3bB5vlGlYgR0pmdiVj8L73Boiw |
    | project_id | ffc560f6a2604c3896df922115c6fc2a |
    | user_id | c4de9fac882740838aa26e9119b30cb9 |
    +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    demorc@root@controller:~$date
    Tue Jan :: CST

Install the Glance service

    demorc@root@controller:~$mysql -uroot -p123456
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is
    Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)

    Copyright (c) , , Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> create database glance;
    Query OK, 1 row affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> \q
    Bye
    demorc@root@controller:~$source adminrc
    adminrc@root@controller:~$

Create the glance service entity

    rootrc@root@controller:~$source adminrc
    adminrc@root@controller:~$openstack service create --name glance --description "OpenStack Image" image
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | OpenStack Image |
    | enabled | True |
    | id | 24eba17c530946fea53413104b8d2035 |
    | name | glance |
    | type | image |
    +-------------+----------------------------------+
    adminrc@root@controller:~$ps -aux | grep -v "grep" | grep keystone
    keystone 0.0 0.2 ? Sl : : (wsgi:keystone-pu -k start
    keystone 0.0 3.0 ? Sl : : (wsgi:keystone-pu -k start
    keystone 0.0 2.1 ? Sl : : (wsgi:keystone-pu -k start
    keystone 0.0 0.2 ? Sl : : (wsgi:keystone-pu -k start
    keystone 0.0 0.2 ? Sl : : (wsgi:keystone-pu -k start
    keystone 0.0 3.1 ? Sl : : (wsgi:keystone-ad -k start
    keystone 0.0 3.0 ? Sl : : (wsgi:keystone-ad -k start
    keystone 0.0 2.2 ? Sl : : (wsgi:keystone-ad -k start
    keystone 0.0 3.1 ? Sl : : (wsgi:keystone-ad -k start
    keystone 0.0 3.1 ? Sl : : (wsgi:keystone-ad -k start
    adminrc@root@controller:~$lsof -i:5000
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    apache2 root 6u IPv6 0t0 TCP *:5000 (LISTEN)
    apache2 www-data 6u IPv6 0t0 TCP *:5000 (LISTEN)
    apache2 www-data 6u IPv6 0t0 TCP *:5000 (LISTEN)
    adminrc@root@controller:~$lsof -i:35357
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    apache2 root 8u IPv6 0t0 TCP *:35357 (LISTEN)
    apache2 www-data 8u IPv6 0t0 TCP *:35357 (LISTEN)
    apache2 www-data 8u IPv6 0t0 TCP *:35357 (LISTEN)
    adminrc@root@controller:~$tail /var/log/keystone/keystone-wsgi-admin.log

Create the Image service API endpoints

    adminrc@root@controller:~$openstack endpoint create --region RegionOne image internal http://controller:9292
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | 83d13b44fbae4abbb89b7f1a9f1519d6 |
    | interface | internal |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 24eba17c530946fea53413104b8d2035 |
    | service_name | glance |
    | service_type | image |
    | url | http://controller:9292 |
    +--------------+----------------------------------+
    adminrc@root@controller:~$openstack endpoint create --region RegionOne image admin http://controller:9292
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | c9708f196a6946f987652cb40b9a8aea |
    | interface | admin |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 24eba17c530946fea53413104b8d2035 |
    | service_name | glance |
    | service_type | image |
    | url | http://controller:9292 |
    +--------------+----------------------------------+

Create the glance user and install the glance package

    adminrc@root@controller:~$openstack user create --domain default --password glance glance
    +-----------+----------------------------------+
    | Field | Value |
    +-----------+----------------------------------+
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | b9c7a987bc494e72899d6ffa7c68c3d0 |
    | name | glance |
    +-----------+----------------------------------+
    adminrc@root@controller:~$openstack role add --project service --user glance admin
    adminrc@root@controller:~$sudo -s
    root@controller:~# apt-get install -y glance
    root@controller:~# echo $?

Configure glance-api.conf

    root@controller:~# cp /etc/glance/glance-api.conf{,.bak}
    root@controller:~# vim /etc/glance/glance-api.conf
    ......
    connection = mysql+pymysql://glance:123456@controller/glance
    ......
    [keystone_authtoken]

    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = glance
    password = glance

    [paste_deploy]
    flavor = keystone

    [glance_store]

    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/

Configure glance-registry.conf

    root@controller:~# cp /etc/glance/glance-registry.conf{,.bak}
    root@controller:~# vim /etc/glance/glance-registry.conf
    .......
    connection = mysql+pymysql://glance:123456@localhost/glance
    .......
    [keystone_authtoken]

    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = glance
    password = glance
    ........
    [paste_deploy]

    flavor = keystone

Populate the Image service database

    root@controller:~# su -s /bin/sh -c "glance-manage db_sync" glance
    ............
    -- ::43.570 INFO migrate.versioning.api [-] done

Restart the services once configured

    root@controller:~# service glance-registry restart
    glance-registry stop/waiting
    glance-registry start/running, process
    root@controller:~# service glance-api restart
    glance-api stop/waiting
    glance-api start/running, process

Source the admin credentials to gain access to admin-only commands

    root@controller:~# source adminrc
    adminrc@root@controller:~$wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
    adminrc@root@controller:~$ls -al cirros-0.3.4-x86_64-disk.img
    -rw-r--r-- root root May cirros-0.3.4-x86_64-disk.img
    adminrc@root@controller:~$file cirros-0.3.4-x86_64-disk.img
    cirros-0.3.4-x86_64-disk.img: QEMU QCOW Image (v2), bytes
    adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
    +------------------+------------------------------------------------------+
    | Field | Value |
    +------------------+------------------------------------------------------+
    | checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
    | container_format | bare |
    | created_at | --14T22::08Z |
    | disk_format | qcow2 |
    | file | /v2/images/39d73bcf-e60b-4caf--cca17de00d7e/file |
    | id | 39d73bcf-e60b-4caf--cca17de00d7e |
    | min_disk | |
    | min_ram | |
    | name | cirrors |
    | owner | 29577090a0e8466ab49cc30a4305f5f8 |
    | protected | False |
    | schema | /v2/schemas/image |
    | size | |
    | status | active |
    | tags | |
    | updated_at | --14T22::08Z |
    | virtual_size | None |
    | visibility | public |
    +------------------+------------------------------------------------------+

List the images

    adminrc@root@controller:~$openstack image list
    +--------------------------------------+---------+--------+
    | ID | Name | Status |
    +--------------------------------------+---------+--------+
    | 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
    +--------------------------------------+---------+--------+

You can also look directly in glance's images directory on the machine

    adminrc@root@controller:~$ls /var/lib/glance/images/
    39d73bcf-e60b-4caf--cca17de00d7e

Problems encountered

The error

    adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
    Service Unavailable: The server is currently unavailable. Please try again at a later time. (HTTP 503)
    adminrc@root@controller:~$cd /var/log/glance/
    adminrc@root@controller:/var/log/glance$ls
    glance-api.log glance-registry.log
    adminrc@root@controller:/var/log/glance$tail glance-api.log
    -- ::06.887 INFO glance.common.wsgi [-] Started child
    -- ::06.889 INFO eventlet.wsgi.server [-] () wsgi starting up on http://0.0.0.0:9292
    -- ::59.019 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    -- ::59.071 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    -- ::59.071 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data
    -- ::59.078 INFO eventlet.wsgi.server [-] 10.0.3.10 - - [/Jan/ ::] "GET /v2/schemas/image HTTP/1.1" 0.170589
    -- ::01.259 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    -- ::01.301 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    -- ::01.302 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data
    -- ::01.306 INFO eventlet.wsgi.server [-] 10.0.3.10 - - [/Jan/ ::] "GET /v2/schemas/image HTTP/1.1" 0.089388
    adminrc@root@controller:/var/log/glance$grep -rHn "ERROR"
    adminrc@root@controller:/var/log/glance$grep -rHn "error"
    glance-api.log::-- ::59.019 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    glance-api.log::-- ::59.071 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    glance-api.log::-- ::01.259 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    glance-api.log::-- ::01.301 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
    adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
    Service Unavailable: The server is currently unavailable. Please try again at a later time. (HTTP 503)
    adminrc@root@controller:~$tail /var/log/keystone/keystone-wsgi-admin.log
    -- ::32.353 INFO keystone.token.providers.fernet.utils [req-749b2de5-d2be-47e8--083c54fe488d - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
    -- ::32.358 INFO keystone.common.wsgi [req-62e3bb30-ef7b-476a-8f49-dc062c1a9452 - - - - -] POST http://controller:35357/v3/auth/tokens
    -- ::32.552 INFO keystone.token.providers.fernet.utils [req-62e3bb30-ef7b-476a-8f49-dc062c1a9452 - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
    -- ::32.561 INFO keystone.token.providers.fernet.utils [req-2540636c-0a56--adbc-deeaf0063210 - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
    -- ::32.682 INFO keystone.common.wsgi [req-2540636c-0a56--adbc-deeaf0063210 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services/image
    -- ::32.686 WARNING keystone.common.wsgi [req-2540636c-0a56--adbc-deeaf0063210 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] Could not find service: image
    -- ::32.691 INFO keystone.token.providers.fernet.utils [req-c4a9af14-d206--a693-23055fcb16e3 - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
    -- ::32.807 INFO keystone.common.wsgi [req-c4a9af14-d206--a693-23055fcb16e3 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services?name=image
    -- ::32.816 INFO keystone.token.providers.fernet.utils [req-cc99a9ba-db21--9c32-4eb39b931efa - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
    -- ::32.939 INFO keystone.common.wsgi [req-cc99a9ba-db21--9c32-4eb39b931efa 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services?type=image

The fix

    # in both glance-api.conf and glance-registry.conf:
    [keystone_authtoken]
    username = glance
    password =
    # the password above had been mixed up with the glance database password; it must be glance,
    # because the service user was created with: openstack user create --domain default --password glance glance

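After correcting the password in both files, restart the two glance services and retry the upload; a sketch reusing the restart commands from earlier:

    service glance-registry restart
    service glance-api restart
    openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
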
Install Nova

    MariaDB [(none)]> create database nova_api;
    Query OK, 1 row affected (0.00 sec)

    MariaDB [(none)]> create database nova;
    Query OK, 1 row affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> \q
    Bye

Create the nova user

    adminrc@root@controller:~$openstack user create --domain default --password nova nova
    +-----------+----------------------------------+
    | Field | Value |
    +-----------+----------------------------------+
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | e4fc73ea1f6d47269ae4ab95ff999326 |
    | name | nova |
    +-----------+----------------------------------+
    # add the admin role to the nova user
    adminrc@root@controller:~$openstack role add --project service --user nova admin

Create the nova service entity

    adminrc@root@controller:~$openstack service create --name nova --description "OpenStack Compute" compute
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | OpenStack Compute |
    | enabled | True |
    | id | 872de5b67b1547adb4826ca1f7ef96b3 |
    | name | nova |
    | type | compute |
    +-------------+----------------------------------+

Create the Compute service API endpoints

    adminrc@root@controller:~$openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
    +--------------+-------------------------------------------+
    | Field | Value |
    +--------------+-------------------------------------------+
    | enabled | True |
    | id | 8e42256f67e446cc88568903286ed462 |
    | interface | public |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 872de5b67b1547adb4826ca1f7ef96b3 |
    | service_name | nova |
    | service_type | compute |
    | url | http://controller:8774/v2.1/%(tenant_id)s |
    +--------------+-------------------------------------------+

    adminrc@root@controller:~$openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
    +--------------+-------------------------------------------+
    | Field | Value |
    +--------------+-------------------------------------------+
    | enabled | True |
    | id | b07f3be5fff4444db57323bb04376d33 |
    | interface | internal |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 872de5b67b1547adb4826ca1f7ef96b3 |
    | service_name | nova |
    | service_type | compute |
    | url | http://controller:8774/v2.1/%(tenant_id)s |
    +--------------+-------------------------------------------+
    adminrc@root@controller:~$openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
    +--------------+-------------------------------------------+
    | Field | Value |
    +--------------+-------------------------------------------+
    | enabled | True |
    | id | 91dc56e437e640c397696318ee1dcc21 |
    | interface | admin |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | 872de5b67b1547adb4826ca1f7ef96b3 |
    | service_name | nova |
    | service_type | compute |
    | url | http://controller:8774/v2.1/%(tenant_id)s |
    +--------------+-------------------------------------------+

Install the Nova component packages

    adminrc@root@controller:~$apt-get install -y nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler

Configure nova.conf

    adminrc@root@controller:~$cp /etc/nova/nova.conf{,.bak}
    adminrc@root@controller:~$vim /etc/nova/nova.conf
    [DEFAULT]
    ........
    rpc_backend=rabbit
    auth_strategy=keystone
    my_ip=10.0.3.10
    use_neutron=True
    firewall_driver=nova.virt.firewall.NoopFirewallDriver

    [database]
    connection=mysql+pymysql://nova:123456@controller/nova

    [api_database]
    connection=mysql+pymysql://nova:123456@controller/nova_api

    [oslo_messaging_rabbit]
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password =  # your RabbitMQ password (value elided in the original)

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = nova

    [vnc]
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = 0.0.0.0

    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp

Sync the databases

    adminrc@root@controller:~$su -s /bin/sh -c "nova-manage api_db sync" nova
    Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
    Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
    ...........
    -- ::43.731 INFO migrate.versioning.api [-] done
    adminrc@root@controller:~$echo $?
    0

    adminrc@root@controller:~$su -s /bin/sh -c "nova-manage db sync" nova
    .......
    -- ::19.955 INFO migrate.versioning.api [-] done
    adminrc@root@controller:~$echo $?
    0

Restart the services

    adminrc@root@controller:~$service nova-api restart
    nova-api stop/waiting
    nova-api start/running, process
    adminrc@root@controller:~$service nova-consoleauth restart
    nova-consoleauth stop/waiting
    nova-consoleauth start/running, process
    adminrc@root@controller:~$service nova-scheduler restart
    nova-scheduler stop/waiting
    nova-scheduler start/running, process
    adminrc@root@controller:~$service nova-conductor restart
    nova-conductor stop/waiting
    nova-conductor start/running, process
    adminrc@root@controller:~$service nova-novncproxy restart
    nova-novncproxy stop/waiting
    nova-novncproxy start/running, process

Check that the services are up

    adminrc@root@controller:/var/log/nova$openstack compute service list
    +----+------------------+------------+----------+---------+-------+----------------------------+
    | Id | Binary | Host | Zone | Status | State | Updated At |
    +----+------------------+------------+----------+---------+-------+----------------------------+
    | | nova-consoleauth | controller | internal | enabled | up | --14T23::50.000000 |
    | | nova-scheduler | controller | internal | enabled | up | --14T23::46.000000 |
    | | nova-conductor | controller | internal | enabled | up | --14T23::49.000000 |
    +----+------------------+------------+----------+---------+-------+----------------------------+

Install the nova-compute node. Since this is a single-node setup, nova-compute is also installed on the controller node.

    adminrc@root@controller:~$apt-get install nova-compute

Reconfigure nova.conf

    adminrc@root@controller:~$cp /etc/nova/nova.conf{,.back}
    adminrc@root@controller:~$vim /etc/nova/nova.conf # other settings stay unchanged
    [vnc]
    enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = $my_ip
    novncproxy_base_url = http://192.168.56.10:6080/vnc_auto.html

Check whether the compute node supports hardware acceleration for virtual machines

    adminrc@root@controller:~$egrep -c '(vmx|svm)' /proc/cpuinfo
    0
    # 0 means hardware acceleration is not supported,
    # so nova-compute.conf must be changed
    adminrc@root@controller:~$cp /etc/nova/nova-compute.conf{,.bak}
    adminrc@root@controller:~$vim /etc/nova/nova-compute.conf
    [libvirt]
    virt_type=qemu # original value was kvm
    # restart the compute service
    adminrc@root@controller:~$service nova-compute restart
    nova-compute stop/waiting
    nova-compute start/running, process
    adminrc@root@controller:~$openstack compute service list
    +----+------------------+------------+----------+---------+-------+----------------------------+
    | Id | Binary | Host | Zone | Status | State | Updated At |
    +----+------------------+------------+----------+---------+-------+----------------------------+
    | | nova-consoleauth | controller | internal | enabled | up | --15T00::51.000000 |
    | | nova-scheduler | controller | internal | enabled | up | --15T00::57.000000 |
    | | nova-conductor | controller | internal | enabled | up | --15T00::50.000000 |
    | | nova-compute | controller | nova | enabled | up | --15T00::54.000000 |
    +----+------------------+------------+----------+---------+-------+----------------------------+
    # to check the nova-api service:
    adminrc@root@controller:~$service nova-api status
    nova-api start/running, process

Install the Neutron networking service

    MariaDB [(none)]> create database neutron;
    Query OK, 1 row affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> \q

Create the neutron user

    adminrc@root@controller:~$openstack user create --domain default --password neutron neutron
    +-----------+----------------------------------+
    | Field | Value |
    +-----------+----------------------------------+
    | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
    | enabled | True |
    | id | 081dc309806c45198a3bd6c39bf9947f |
    | name | neutron |
    +-----------+----------------------------------+
    adminrc@root@controller:~$openstack role add --project service --user neutron admin
    adminrc@root@controller:~$

Create the neutron service entity

    adminrc@root@controller:~$openstack service create --name neutron --description "OpenStack Networking" network
    +-------------+----------------------------------+
    | Field | Value |
    +-------------+----------------------------------+
    | description | OpenStack Networking |
    | enabled | True |
    | id | c661b602f11d45cfb068027c77fd519e |
    | name | neutron |
    | type | network |
    +-------------+----------------------------------+

Create the neutron service endpoints

    adminrc@root@controller:~$openstack endpoint create --region RegionOne network public http://controller:9696
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | 0192ba47a7b348ec88bb5f71c82f8f4c |
    | interface | public |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | c661b602f11d45cfb068027c77fd519e |
    | service_name | neutron |
    | service_type | network |
    | url | http://controller:9696 |
    +--------------+----------------------------------+
    adminrc@root@controller:~$openstack endpoint create --region RegionOne network internal http://controller:9696
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | bdf4b9663ccb4ef695cde0638231943a |
    | interface | internal |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | c661b602f11d45cfb068027c77fd519e |
    | service_name | neutron |
    | service_type | network |
    | url | http://controller:9696 |
    +--------------+----------------------------------+
    adminrc@root@controller:~$openstack endpoint create --region RegionOne network admin http://controller:9696
    +--------------+----------------------------------+
    | Field | Value |
    +--------------+----------------------------------+
    | enabled | True |
    | id | ffc7a793985e494fa839fd76ea5bdcef |
    | interface | admin |
    | region | RegionOne |
    | region_id | RegionOne |
    | service_id | c661b602f11d45cfb068027c77fd519e |
    | service_name | neutron |
    | service_type | network |
    | url | http://controller:9696 |
    +--------------+----------------------------------+

Configure the networking options. There are two choices:

1. Provider (public) networks

2. Self-service (private) networks

For provider networks, first install the components:

  1. adminrc@root@controller:~$apt-get install -y neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

Then back up and edit neutron.conf:

  1. adminrc@root@controller:~$cp /etc/neutron/neutron.conf{,.bak}
  2. adminrc@root@controller:~$vim /etc/neutron/neutron.conf
  3. # settings that need to change
  4. [database]
  5. connection = mysql+pymysql://neutron:123456@controller/neutron
  6.  
  7. [DEFAULT]
  8. rpc_backend = rabbit
  9. core_plugin = ml2
  10. service_plugins =
  11. auth_strategy = keystone
  12. notify_nova_on_port_status_changes = True
  13. notify_nova_on_port_data_changes = True
  14.  
  15. [oslo_messaging_rabbit]
  16.  
  17. rabbit_host = controller
  18. rabbit_userid = openstack
  19. rabbit_password = RABBIT_PASS
  20.  
  21. [keystone_authtoken]
  22.  
  23. auth_uri = http://controller:5000
  24. auth_url = http://controller:35357
  25. memcached_servers = controller:11211
  26. auth_type = password
  27. project_domain_name = default
  28. user_domain_name = default
  29. project_name = service
  30. username = neutron
  31. password = neutron
  32.  
  33. [nova]
  34.  
  35. auth_url = http://controller:35357
  36. auth_type = password
  37. project_domain_name = default
  38. user_domain_name = default
  39. region_name = RegionOne
  40. project_name = service
  41. username = nova
  42. password = nova
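
Because neutron.conf is mostly comments, a convenient way to review only the active settings after editing is to filter out comment and blank lines (a small helper, not from the original text):

  1. adminrc@root@controller:~$grep -Ev '^[[:space:]]*(#|$)' /etc/neutron/neutron.conf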

Configure the ML2 plug-in

  1. adminrc@root@controller:~$cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
  2. adminrc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
  3. # items that need to change
  4. [ml2]
  5.  
  6. type_drivers = flat,vlan
  7. tenant_network_types =
  8. mechanism_drivers = linuxbridge
  9. extension_drivers = port_security
  10.  
  11. [ml2_type_flat]
  12.  
  13. flat_networks = provider
  14.  
  15. [securitygroup]
  16.  
  17. enable_ipset = True

Configure linuxbridge_agent.ini

  1. adminrc@root@controller:~$cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
  2. adminrc@root@controller:~$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  3. [linux_bridge]
  4. physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME # PROVIDER_INTERFACE_NAME is the provider NIC, eth0 in this setup
  5.  
  6. [vxlan]
  7. enable_vxlan = False
  8.  
  9. [securitygroup]
  10.  
  11. enable_security_group = True
  12. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure dhcp_agent.ini

  1. adminrc@root@controller:~$cp /etc/neutron/dhcp_agent.ini{,.bak}
  2. adminrc@root@controller:~$vim /etc/neutron/dhcp_agent.ini
  3. interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  4. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  5. enable_isolated_metadata = True

Configure the metadata agent

  1. adminrc@root@controller:~$cp /etc/neutron/metadata_agent.ini{,.bak}
  2. adminrc@root@controller:~$vim /etc/neutron/metadata_agent.ini
  3. [DEFAULT]
  4.  
  5. nova_metadata_ip = controller
  6. metadata_proxy_shared_secret = METADATA_SECRET
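
METADATA_SECRET is a placeholder: pick an arbitrary shared secret and use the same value again in the [neutron] section of nova.conf below. One way to generate one, assuming openssl is available:

  1. root@controller:~# openssl rand -hex 10
  2. # use the resulting string as metadata_proxy_shared_secret in both
  3. # /etc/neutron/metadata_agent.ini and /etc/nova/nova.conf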

Configure the Compute service to use Networking

  1. adminrc@root@controller:~$vim /etc/nova/nova.conf
  2. [neutron] # append the following under this section at the end of the file
  3. url = http://controller:9696
  4. auth_url = http://controller:35357
  5. auth_type = password
  6. project_domain_name = default
  7. user_domain_name = default
  8. region_name = RegionOne
  9. project_name = service
  10. username = neutron
  11. password = NEUTRON_PASS # i.e. neutron, the password set for the neutron user above
  12.  
  13. service_metadata_proxy = True
  14. metadata_proxy_shared_secret = METADATA_SECRET

Populate the database

  1. su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  2. --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
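
If the migration succeeds, the schema should now exist; a quick check, again assuming the 123456 database password:

  1. root@controller:~# mysql -h controller -u neutron -p123456 -e 'USE neutron; SHOW TABLES;' | head
  2. # a non-empty table list means the upgrade ran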

Restart the Compute API service and the Networking services

  1. adminrc@root@controller:~$service nova-api restart
  2. nova-api stop/waiting
  3. nova-api start/running, process
  4. adminrc@root@controller:~$service neutron-server restart
  5. neutron-server stop/waiting
  6. neutron-server start/running, process
  7. adminrc@root@controller:~$service neutron-server restart
  8. neutron-server stop/waiting
  9. neutron-server start/running, process
  10. adminrc@root@controller:~$service neutron-linuxbridge-agent restart
  11. neutron-linuxbridge-agent stop/waiting
  12. neutron-linuxbridge-agent start/running, process
  13. adminrc@root@controller:~$service neutron-dhcp-agent restart
  14. neutron-dhcp-agent stop/waiting
  15. neutron-dhcp-agent start/running, process
  16. adminrc@root@controller:~$service neutron-metadata-agent restart
  17. neutron-metadata-agent stop/waiting
  18. neutron-metadata-agent start/running, process

Restart neutron-l3-agent

  1. adminrc@root@controller:~$service neutron-l3-agent restart
  2. neutron-l3-agent stop/waiting
  3. neutron-l3-agent start/running, process

Restart the compute-side services

  1. adminrc@root@controller:~$service nova-compute restart
  2. nova-compute stop/waiting
  3. nova-compute start/running, process
  4. adminrc@root@controller:~$service neutron-linuxbridge-agent restart
  5. neutron-linuxbridge-agent stop/waiting
  6. neutron-linuxbridge-agent start/running, process

Check whether any networks exist

  1. adminrc@root@controller:~$openstack network list

The output is empty because no networks have been created yet.

Verify that neutron-server started properly

  1. adminrc@root@controller:~$neutron ext-list
  2. +---------------------------+-----------------------------------------------+
  3. | alias | name |
  4. +---------------------------+-----------------------------------------------+
  5. | default-subnetpools | Default Subnetpools |
  6. | availability_zone | Availability Zone |
  7. | network_availability_zone | Network Availability Zone |
  8. | auto-allocated-topology | Auto Allocated Topology Services |
  9. | binding | Port Binding |
  10. | agent | agent |
  11. | subnet_allocation | Subnet Allocation |
  12. | dhcp_agent_scheduler | DHCP Agent Scheduler |
  13. | tag | Tag support |
  14. | external-net | Neutron external network |
  15. | net-mtu | Network MTU |
  16. | network-ip-availability | Network IP Availability |
  17. | quotas | Quota management support |
  18. | provider | Provider Network |
  19. | multi-provider | Multi Provider Network |
  20. | address-scope | Address scope |
  21. | timestamp_core | Time Stamp Fields addition for core resources |
  22. | extra_dhcp_opt | Neutron Extra DHCP opts |
  23. | security-group | security-group |
  24. | rbac-policies | RBAC Policies |
  25. | standard-attr-description | standard-attr-description |
  26. | port-security | Port Security |
  27. | allowed-address-pairs | Allowed Address Pairs |
  28. +---------------------------+-----------------------------------------------+

Verify the agents

  1. adminrc@root@controller:~$neutron agent-list
  2. +--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
  3. | id | agent_type | host | availability_zone | alive | admin_state_up | binary |
  4. +--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
  5. | 0cafd3ff-6da0--a6dd-9a60136af93a | DHCP agent | controller | nova | :-) | True | neutron-dhcp-agent |
  6. | 53fce606-311d--8af0-efd6f9087e34 | Open vSwitch agent | controller | | :-) | True | neutron-openvswitch-agent |
  7. | b5dffa68-a505-448f-8fa6-7d8bb16eb07a | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
  8. | dc161e12-8b23-4f49--b7d68cfe2197 | Metadata agent | controller | | :-) | True | neutron-metadata-agent |
  9. +--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

Create an instance

First, a virtual network is needed.

Create a provider network

  1. adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
  2. Invalid input for operation: network_type value 'flat' not supported.
  3. Neutron server returns request_ids: ['req-e9d3cb26-4156-4eb1-bc9e-9528dbbd1dc9']

Based on the error, check the ml2_conf.ini file

  1. [ml2]
  2.  
  3. type_drivers = flat,vlan # confirm this line includes flat

Restart the service and run the network creation again

  1. adminrc@root@controller:~$service neutron-server restart
  2. neutron-server stop/waiting
  3. neutron-server start/running, process
  4. adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
  5. Created a new network:
  6. +---------------------------+--------------------------------------+
  7. | Field | Value |
  8. +---------------------------+--------------------------------------+
  9. | admin_state_up | True |
  10. | availability_zone_hints | |
  11. | availability_zones | |
  12. | created_at | --15T12:: |
  13. | description | |
  14. | id | ab73ff8f-2d19--811c-85c068290eeb |
  15. | ipv4_address_scope | |
  16. | ipv6_address_scope | |
  17. | mtu | |
  18. | name | provider |
  19. | port_security_enabled | True |
  20. | provider:network_type | flat |
  21. | provider:physical_network | provider |
  22. | provider:segmentation_id | |
  23. | router:external | False |
  24. | shared | True |
  25. | status | ACTIVE |
  26. | subnets | |
  27. | tags | |
  28. | tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
  29. | updated_at | --15T12:: |
  30. +---------------------------+--------------------------------------+

Next, create a subnet

  1. adminrc@root@controller:~$neutron subnet-create --name provider --allocation-pool start=10.0.3.50,end=10.0.3.253 --dns-nameserver 114.114.114.114 --gateway 10.0.3.1 provider 10.0.3.0/24
  2. Created a new subnet:
  3. +-------------------+---------------------------------------------+
  4. | Field | Value |
  5. +-------------------+---------------------------------------------+
  6. | allocation_pools | {"start": "10.0.3.50", "end": "10.0.3.253"} |
  7. | cidr | 10.0.3.0/ |
  8. | created_at | --15T12:: |
  9. | description | |
  10. | dns_nameservers | 114.114.114.114 |
  11. | enable_dhcp | True |
  12. | gateway_ip | 10.0.3.1 |
  13. | host_routes | |
  14. | id | 48faef6d-ee9d-4b46-a56d-3c196a766224 |
  15. | ip_version | |
  16. | ipv6_address_mode | |
  17. | ipv6_ra_mode | |
  18. | name | provider |
  19. | network_id | ab73ff8f-2d19--811c-85c068290eeb |
  20. | subnetpool_id | |
  21. | tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
  22. | updated_at | --15T12:: |
  23. +-------------------+---------------------------------------------+

Next, create a flavor

  1. adminrc@root@controller:~$openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
  2. +----------------------------+---------+
  3. | Field | Value |
  4. +----------------------------+---------+
  5. | OS-FLV-DISABLED:disabled | False |
  6. | OS-FLV-EXT-DATA:ephemeral | |
  7. | disk | |
  8. | id | |
  9. | name | m1.nano |
  10. | os-flavor-access:is_public | True |
  11. | ram | |
  12. | rxtx_factor | 1.0 |
  13. | swap | |
  14. | vcpus | |
  15. +----------------------------+---------+
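
The flavor's parameters can be confirmed afterwards with a standard client call:

  1. adminrc@root@controller:~$openstack flavor show m1.nano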

Generate a key pair

  1. adminrc@root@controller:~$pwd
  2. /home/openstack
  3. adminrc@root@controller:~$ssh-keygen
  4. Generating public/private rsa key pair.
  5. Enter file in which to save the key (/root/.ssh/id_rsa):
  6. Created directory '/root/.ssh'.
  7. Enter passphrase (empty for no passphrase):
  8. Enter same passphrase again:
  9. Your identification has been saved in /root/.ssh/id_rsa.
  10. Your public key has been saved in /root/.ssh/id_rsa.pub.
  11. The key fingerprint is:
  12. 8a:e5:a2:f3:f4:1e::1a:c1:8d::d1:fd:fa:4b: root@controller
  13. The key's randomart image is:
  14. +--[ RSA ]----+
  15. | |
  16. | . . |
  17. | . . . |
  18. | . o . . |
  19. | + = S . . E|
  20. | B o . . . |
  21. | = * . . |
  22. | .o = o o |
  23. | .oo.o o. |
  24. +-----------------+
  25. adminrc@root@controller:~$ls -al /root/.ssh/id_rsa.pub
  26. -rw-r--r-- root root Jan : /root/.ssh/id_rsa.pub

Add the key pair

  1. adminrc@root@controller:~$openstack keypair create --public-key /root/.ssh/id_rsa.pub rootkey
  2. +-------------+-------------------------------------------------+
  3. | Field | Value |
  4. +-------------+-------------------------------------------------+
  5. | fingerprint | 8a:e5:a2:f3:f4:1e::1a:c1:8d::d1:fd:fa:4b: |
  6. | name | rootkey |
  7. | user_id | 653177098fac40a28734093706299e66 |
  8. +-------------+-------------------------------------------------+

Verify the key pair

  1. adminrc@root@controller:~$openstack keypair list
  2. +---------+-------------------------------------------------+
  3. | Name | Fingerprint |
  4. +---------+-------------------------------------------------+
  5. | rootkey | 8a:e5:a2:f3:f4:1e::1a:c1:8d::d1:fd:fa:4b: |
  6. +---------+-------------------------------------------------+

Add security group rules

  1. adminrc@root@controller:~$openstack security group rule create --proto icmp default
  2. +-----------------------+--------------------------------------+
  3. | Field | Value |
  4. +-----------------------+--------------------------------------+
  5. | id | a4c8ad46-42eb--b09f-af5dcfef2ad1 |
  6. | ip_protocol | icmp |
  7. | ip_range | 0.0.0.0/0 |
  8. | parent_group_id | 968f5f33-c569-46b4--8a3f614ae670 |
  9. | port_range | |
  10. | remote_security_group | |
  11. +-----------------------+--------------------------------------+
  12. adminrc@root@controller:~$openstack security group rule create --proto tcp --dst-port 22 default
  13. +-----------------------+--------------------------------------+
  14. | Field | Value |
  15. +-----------------------+--------------------------------------+
  16. | id | 8ed34a22----94ec284e4764 |
  17. | ip_protocol | tcp |
  18. | ip_range | 0.0.0.0/0 |
  19. | parent_group_id | 968f5f33-c569-46b4--8a3f614ae670 |
  20. | port_range | 22:22 |
  21. | remote_security_group | |
  22. +-----------------------+--------------------------------------+
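
To confirm that both rules landed in the default group, list them (a verification step, not in the original run):

  1. adminrc@root@controller:~$openstack security group rule list default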

Launch an instance

  1. # list the available flavors
  2. adminrc@root@controller:~$openstack flavor list
  3. +----+-----------+-------+------+-----------+-------+-----------+
  4. | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
  5. +----+-----------+-------+------+-----------+-------+-----------+
  6. | | m1.nano | | | | | True |
  7. | | m1.tiny | | | | | True |
  8. | | m1.small | | | | | True |
  9. | | m1.medium | | | | | True |
  10. | | m1.large | | | | | True |
  11. | | m1.xlarge | | | | | True |
  12. +----+-----------+-------+------+-----------+-------+-----------+
  13. # list the available images
  14. adminrc@root@controller:~$openstack image list
  15. +--------------------------------------+---------+--------+
  16. | ID | Name | Status |
  17. +--------------------------------------+---------+--------+
  18. | 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
  19. +--------------------------------------+---------+--------+
  20. # list the available networks
  21. adminrc@root@controller:~$openstack network list
  22. +--------------------------------------+----------+--------------------------------------+
  23. | ID | Name | Subnets |
  24. +--------------------------------------+----------+--------------------------------------+
  25. | ab73ff8f-2d19--811c-85c068290eeb | provider | 48faef6d-ee9d-4b46-a56d-3c196a766224 |
  26. +--------------------------------------+----------+--------------------------------------+
  27. # list the available security groups
  28. adminrc@root@controller:~$openstack security group list
  29. +--------------------------------------+---------+------------------------+----------------------------------+
  30. | ID | Name | Description | Project |
  31. +--------------------------------------+---------+------------------------+----------------------------------+
  32. | 968f5f33-c569-46b4--8a3f614ae670 | default | Default security group | 29577090a0e8466ab49cc30a4305f5f8 |
  33. +--------------------------------------+---------+------------------------+----------------------------------+
  34. # create the instance
  35. adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirros --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
  36. No image with a name or ID of 'cirros' exists.
  37. # OK, something's off again
  38. # Listing the images once more reveals the problem: I typed cirros, but the image's name is actually cirrors.
  39. adminrc@root@controller:~$openstack image list
  40. +--------------------------------------+---------+--------+
  41. | ID | Name | Status |
  42. +--------------------------------------+---------+--------+
  43. | 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
  44. +--------------------------------------+---------+--------+
  45. adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
  46. +--------------------------------------+------------------------------------------------+
  47. | Field | Value |
  48. +--------------------------------------+------------------------------------------------+
  49. | OS-DCF:diskConfig | MANUAL |
  50. | OS-EXT-AZ:availability_zone | |
  51. | OS-EXT-SRV-ATTR:host | None |
  52. | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
  53. | OS-EXT-SRV-ATTR:instance_name | instance- |
  54. | OS-EXT-STS:power_state | |
  55. | OS-EXT-STS:task_state | scheduling |
  56. | OS-EXT-STS:vm_state | building |
  57. | OS-SRV-USG:launched_at | None |
  58. | OS-SRV-USG:terminated_at | None |
  59. | accessIPv4 | |
  60. | accessIPv6 | |
  61. | addresses | |
  62. | adminPass | WeVy7yd6BXcc |
  63. | config_drive | |
  64. | created | --15T13::19Z |
  65. | flavor | m1.nano () |
  66. | hostId | |
  67. | id | 9eb49f96-7d68--bb37-7583e457edc6 |
  68. | image | cirrors (39d73bcf-e60b-4caf--cca17de00d7e) |
  69. | key_name | rootkey |
  70. | name | test-instance |
  71. | os-extended-volumes:volumes_attached | [] |
  72. | progress | |
  73. | project_id | 29577090a0e8466ab49cc30a4305f5f8 |
  74. | properties | |
  75. | security_groups | [{u'name': u'default'}] |
  76. | status | BUILD |
  77. | updated | --15T13::20Z |
  78. | user_id | 653177098fac40a28734093706299e66 |
  79. +--------------------------------------+------------------------------------------------+
  80. # created successfully

List the instances

  1. adminrc@root@controller:~$openstack server list
  2. +--------------------------------------+---------------+--------+--------------------+
  3. | ID | Name | Status | Networks |
  4. +--------------------------------------+---------------+--------+--------------------+
  5. | 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | provider=10.0.3.51 |
  6. +--------------------------------------+---------------+--------+--------------------+
  7. adminrc@root@controller:~$nova image-list
  8. +--------------------------------------+---------+--------+--------+
  9. | ID | Name | Status | Server |
  10. +--------------------------------------+---------+--------+--------+
  11. | 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | ACTIVE | |
  12. +--------------------------------------+---------+--------+--------+
  13. adminrc@root@controller:~$glance image-list
  14. +--------------------------------------+---------+
  15. | ID | Name |
  16. +--------------------------------------+---------+
  17. | 39d73bcf-e60b-4caf--cca17de00d7e | cirrors |
  18. +--------------------------------------+---------+
  19. adminrc@root@controller:~$nova list
  20. +--------------------------------------+---------------+--------+------------+-------------+--------------------+
  21. | ID | Name | Status | Task State | Power State | Networks |
  22. +--------------------------------------+---------------+--------+------------+-------------+--------------------+
  23. | 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | - | Running | provider=10.0.3.51 |
  24. +--------------------------------------+---------------+--------+------------+-------------+--------------------+

The equivalent nova boot command

  1. adminrc@root@controller:~$nova boot --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-groups default --key-name rootkey test-instance

To debug a command, add --debug

  1. adminrc@root@controller:~$openstack --debug server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance

Access the instance with the virtual console

  1. adminrc@root@controller:~$openstack console url show test-instance
  2. +-------+------------------------------------------------------------------------------------+
  3. | Field | Value |
  4. +-------+------------------------------------------------------------------------------------+
  5. | type | novnc |
  6. | url | http://192.168.56.10:6080/vnc_auto.html?token=ce586e5f-ceb1-4f7d-b039-0e44ae273686 |
  7. +-------+------------------------------------------------------------------------------------+

The login prompt makes it clear:

Username: cirros

Password: cubswin:)

Use sudo to switch to the root user.

Next, take a look around.

Test network connectivity

Then create a second instance

  1. adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
  2. +--------------------------------------+------------------------------------------------+
  3. | Field | Value |
  4. +--------------------------------------+------------------------------------------------+
  5. | OS-DCF:diskConfig | MANUAL |
  6. | OS-EXT-AZ:availability_zone | |
  7. | OS-EXT-SRV-ATTR:host | None |
  8. | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
  9. | OS-EXT-SRV-ATTR:instance_name | instance- |
  10. | OS-EXT-STS:power_state | |
  11. | OS-EXT-STS:task_state | scheduling |
  12. | OS-EXT-STS:vm_state | building |
  13. | OS-SRV-USG:launched_at | None |
  14. | OS-SRV-USG:terminated_at | None |
  15. | accessIPv4 | |
  16. | accessIPv6 | |
  17. | addresses | |
  18. | adminPass | QrFxY7UnvuJV |
  19. | config_drive | |
  20. | created | --15T14::15Z |
  21. | flavor | m1.nano () |
  22. | hostId | |
  23. | id | 203a1f48-1f98-44ca-a3fa-883a9cea514a |
  24. | image | cirrors (39d73bcf-e60b-4caf--cca17de00d7e) |
  25. | key_name | rootkey |
  26. | name | test-instance |
  27. | os-extended-volumes:volumes_attached | [] |
  28. | progress | |
  29. | project_id | 29577090a0e8466ab49cc30a4305f5f8 |
  30. | properties | |
  31. | security_groups | [{u'name': u'default'}] |
  32. | status | BUILD |
  33. | updated | --15T14::15Z |
  34. | user_id | 653177098fac40a28734093706299e66 |
  35. +--------------------------------------+------------------------------------------------+
  36. # check
  37. adminrc@root@controller:~$nova list
  38. +--------------------------------------+---------------+--------+------------+-------------+--------------------+
  39. | ID | Name | Status | Task State | Power State | Networks |
  40. +--------------------------------------+---------------+--------+------------+-------------+--------------------+
  41. | 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | - | Running | provider=10.0.3.52 |
  42. | 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | - | Running | provider=10.0.3.51 |
  43. +--------------------------------------+---------------+--------+------------+-------------+--------------------+

Two instances now exist, both in the Running state.

Let's exercise instance 2 from the command line

  1. adminrc@root@controller:~$ping -c 2 10.0.3.52
  2. PING 10.0.3.52 (10.0.3.52) () bytes of data.
  3. bytes from 10.0.3.52: icmp_seq= ttl= time=28.5 ms
  4. bytes from 10.0.3.52: icmp_seq= ttl= time=0.477 ms
  5.  
  6. --- 10.0.3.52 ping statistics ---
  7. packets transmitted, received, % packet loss, time 1001ms
  8. rtt min/avg/max/mdev = 0.477/14.505/28.534/14.029 ms
  9. adminrc@root@controller:~$nova list
  10. +--------------------------------------+---------------+--------+------------+-------------+--------------------+
  11. | ID | Name | Status | Task State | Power State | Networks |
  12. +--------------------------------------+---------------+--------+------------+-------------+--------------------+
  13. | 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | - | Running | provider=10.0.3.52 |
  14. | 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | - | Running | provider=10.0.3.51 |
  15. +--------------------------------------+---------------+--------+------------+-------------+--------------------+

Use openstack console url show to get the console URL

  1. adminrc@root@controller:~$openstack console url show test-instance
  2. More than one server exists with the name 'test-instance'.
  3. # there are two servers with the same name, so use the ID instead
  4. adminrc@root@controller:~$openstack console url show 203a1f48-1f98-44ca-a3fa-883a9cea514a
  5. +-------+------------------------------------------------------------------------------------+
  6. | Field | Value |
  7. +-------+------------------------------------------------------------------------------------+
  8. | type | novnc |
  9. | url | http://192.168.56.10:6080/vnc_auto.html?token=42c43635-884c-482e-ac08-d1e6c6d2789b |
  10. +-------+------------------------------------------------------------------------------------+

# Note: for some reason key-based ssh doesn't work here. With the security group rule in place, ssh cirros@10.0.3.52 should log straight in, but it prompts for a password instead. This remains an open issue for now...

For now, the only known workaround is to log in with the username and password above.
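
One way to narrow this down (a debugging sketch, not something done in the original run) is to log in on the VNC console and check whether the instance actually received the public key from the metadata service; if it never arrived, ssh will correctly fall back to password authentication:

  1. $ wget -qO- http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
  2. # empty output points at a broken metadata path; check that
  3. # metadata_proxy_shared_secret matches between neutron and nova
  4. $ cat ~/.ssh/authorized_keys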

Test from the command line

  1. adminrc@root@controller:~$ssh cirros@10.0.3.52
  2. cirros@10.0.3.52's password:# cubswin:)
  3.  
  4. $ ifconfig
  5. eth0 Link encap:Ethernet HWaddr FA::3E:::DE
  6. inet addr:10.0.3.52 Bcast:10.0.3.255 Mask:255.255.255.0
  7. inet6 addr: fe80::f816:3eff:fe07:21de/ Scope:Link
  8. UP BROADCAST RUNNING MULTICAST MTU: Metric:
  9. RX packets: errors: dropped: overruns: frame:
  10. TX packets: errors: dropped: overruns: carrier:
  11. collisions: txqueuelen:
  12. RX bytes: (17.4 KiB) TX bytes: (16.8 KiB)
  13.  
  14. lo Link encap:Local Loopback
  15. inet addr:127.0.0.1 Mask:255.0.0.0
  16. inet6 addr: ::/ Scope:Host
  17. UP LOOPBACK RUNNING MTU: Metric:
  18. RX packets: errors: dropped: overruns: frame:
  19. TX packets: errors: dropped: overruns: carrier:
  20. collisions: txqueuelen:
  21. RX bytes: (0.0 B) TX bytes: (0.0 B)
  22.  
  23. $ ping -c 2 10.0.3.1
  24. PING 10.0.3.1 (10.0.3.1): data bytes
  25. bytes from 10.0.3.1: seq= ttl= time=45.026 ms
  26. bytes from 10.0.3.1: seq= ttl= time=1.050 ms
  27.  
  28. --- 10.0.3.1 ping statistics ---
  29. packets transmitted, packets received, % packet loss
  30. round-trip min/avg/max = 1.050/23.038/45.026 ms
  31. $ ping -c 2 www.qq.com
  32. PING www.qq.com (61.129.7.47): data bytes
  33. bytes from 61.129.7.47: seq= ttl= time=5.527 ms
  34. bytes from 61.129.7.47: seq= ttl= time=5.363 ms
  35.  
  36. --- www.qq.com ping statistics ---
  37. packets transmitted, packets received, % packet loss
  38. round-trip min/avg/max = 5.363/5.445/5.527 ms

Test connectivity between the two instances

  1. $ sudo -s
  2. $ hostname
  3. cirros
  4. $ ping -c 2 10.0.3.51
  5. PING 10.0.3.51 (10.0.3.51): data bytes
  6. bytes from 10.0.3.51: seq= ttl= time=28.903 ms
  7. bytes from 10.0.3.51: seq= ttl= time=1.205 ms
  8.  
  9. --- 10.0.3.51 ping statistics ---
  10. packets transmitted, packets received, % packet loss
  11. round-trip min/avg/max = 1.205/15.054/28.903 ms
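
Both instances sit on the same flat provider network, so their traffic crosses the Linux bridge on the controller. The bridge and the per-instance tap devices can be inspected with (assuming bridge-utils is installed):

  1. root@controller:~# brctl show
  2. # expect one bridge for the provider network holding eth0 plus one tap interface per instance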

For self-service (private) networks

Install the components

  1. root@controller:~# apt-get install -y neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

Since this builds on the existing provider-network setup, some configuration files need to change. To verify settings, the expected lines are written to a scratch file (here named default) and grepped against the live config.

Confirm the settings in neutron.conf

  1. root@controller:~# ls /etc/neutron/neutron.*
  2. neutron.conf neutron.conf.bak
  3. root@controller:~# vim default
  4. root@controller:~# cat default
  5. core_plugin = ml2 # note: each line must be flush-left with no blank lines, or the grep check below won't match
  6. service_plugins = router
  7. allow_overlapping_ips = True
  8. rpc_backend = rabbit
  9. auth_strategy = keystone
  10. notify_nova_on_port_status_changes = True
  11. notify_nova_on_port_data_changes = True
  12.  
  13. root@controller:~# grep "`cat default`" /etc/neutron/neutron.conf
  14. auth_strategy = keystone
  15. core_plugin = ml2
  16. service_plugins = router
  17. allow_overlapping_ips = True
  18. rpc_backend = rabbit
  19.  
  20. root@controller:~# grep "^connection" /etc/neutron/neutron.conf
  21. connection = mysql+pymysql://neutron:123456@controller/neutron
  22. root@controller:~# grep "core_plugin" /etc/neutron/neutron.conf
  23. core_plugin = ml2
  24. root@controller:~# grep "service_plugins" /etc/neutron/neutron.conf
  25. service_plugins =
  26. root@controller:~# sed -i "s/service_plugins\=/service_plugins\ =\ router/g" /etc/neutron/neutron.conf
  27. root@controller:~# grep "service_plugins" /etc/neutron/neutron.conf
  service_plugins = router
  28. root@controller:~# grep "allow_overlapping_ips" /etc/neutron/neutron.conf
  #allow_overlapping_ips = false
  29. root@controller:~# sed -i "s/\#allow_overlapping_ips\ =\ false/allow_overlapping_ips\ =\ True/g" /etc/neutron/neutron.conf
  30. root@controller:~# grep "allow_overlapping_ips" /etc/neutron/neutron.conf
  allow_overlapping_ips = True
  31. root@controller:~# grep "rpc_backend = rabbit" /etc/neutron/neutron.conf
  32. rpc_backend = rabbit
  33. root@controller:~# grep "rabbit_host = controller" /etc/neutron/neutron.conf
  34. rabbit_host = controller
  35. root@controller:~# grep "rabbit_userid = openstack" /etc/neutron/neutron.conf
  36. rabbit_userid = openstack
  37. root@controller:~# grep "rabbit_password = 123456" /etc/neutron/neutron.conf
  rabbit_password = 123456
  38. root@controller:~# cat keystone_authtoken
  39. auth_uri = http://controller:5000
  40. auth_url = http://controller:35357
  41. memcached_servers = controller:11211
  42. auth_type = password
  43. project_domain_name = default
  44. user_domain_name = default
  45. project_name = service
  46. username = neutron
  47. password = neutron
  48. root@controller:~# grep "`cat keystone_authtoken`" /etc/neutron/neutron.conf
  49. auth_url = http://controller:35357
  50. memcached_servers = controller:11211
  51. auth_type = password
  52. project_domain_name = default
  53. user_domain_name = default
  54. project_name = service
  55. username = neutron
  56. password = neutron
  57. auth_url = http://controller:35357
  58. auth_type = password
  59. project_domain_name = default
  60. user_domain_name = default
  61. project_name = service
  62.  
  63. root@controller:~# grep "`cat oslo_messaging_rabbit`" /etc/neutron/neutron.conf
  64. rabbit_host = controller
  65. rabbit_userid = openstack
  66. rabbit_password = 123456
  67.  
  68. root@controller:~# vim nova
  69. root@controller:~# cat nova
  70. auth_url = http://controller:35357
  71. auth_type = password
  72. project_domain_name = default
  73. user_domain_name = default
  74. region_name = RegionOne
  75. project_name = service
  76. username = nova
  77. password = nova
  78.  
  79. root@controller:~# grep "`cat nova`" /etc/neutron/neutron.conf
  80. auth_url = http://controller:35357
  81. auth_type = password
  82. project_domain_name = default
  83. user_domain_name = default
  84. project_name = service
  85. auth_url = http://controller:35357
  86. auth_type = password
  87. project_domain_name = default
  88. user_domain_name = default
  89. region_name = RegionOne
  90. project_name = service
  91. username = nova
  92. password = nova

You can also do it this way

  1. root@controller:~# vim neutron
  2. root@controller:~# cat neutron
  3. ^\[database\]
  4. connection = mysql+pymysql://neutron:123456@controller/neutron
  5. ^\[DEFAULT\]
  6. core_plugin = ml2
  7. service_plugins = router
  8. allow_overlapping_ips = True
  9. rpc_backend = rabbit
  10. auth_strategy = keystone
  11. notify_nova_on_port_status_changes = True
  12. notify_nova_on_port_data_changes = True
  13. ^\[oslo_messaging_rabbit\]
  14. rabbit_host = controller
  15. rabbit_userid = openstack
  16. rabbit_password = 123456
  17. ^\[keystone_authtoken\]
  18. auth_uri = http://controller:5000
  19. auth_url = http://controller:35357
  20. memcached_servers = controller:11211
  21. auth_type = password
  22. project_domain_name = default
  23. user_domain_name = default
  24. project_name = service
  25. username = neutron
  26. password = neutron
  27. ^\[nova\]
  28. auth_url = http://controller:35357
  29. auth_type = password
  30. project_domain_name = default
  31. user_domain_name = default
  32. region_name = RegionOne
  33. project_name = service
  34. username = nova
  35. password = nova
  36.  
  37. root@controller:~# grep "`cat neutron`" /etc/neutron/neutron.conf
  38. [DEFAULT]
  39. auth_strategy = keystone
  40. core_plugin = ml2
  41. service_plugins = router
  42. allow_overlapping_ips = True
  43. rpc_backend = rabbit
  44. [database]
  45. connection = mysql+pymysql://neutron:123456@controller/neutron
  46. [keystone_authtoken]
  47. auth_url = http://controller:35357
  48. memcached_servers = controller:11211
  49. auth_type = password
  50. project_domain_name = default
  51. user_domain_name = default
  52. project_name = service
  53. username = neutron
  54. password = neutron
  55. [nova]
  56. auth_url = http://controller:35357
  57. auth_type = password
  58. project_domain_name = default
  59. user_domain_name = default
  60. region_name = RegionOne
  61. project_name = service
  62. username = nova
  63. [oslo_messaging_rabbit]
  64. rabbit_host = controller
  65. rabbit_userid = openstack

Confirm ml2_conf.ini

  1. root@controller:~# cat ml2
  2. type_drivers = flat,vlan,vxlan
  3. tenant_network_types = vxlan
  4. mechanism_drivers = linuxbridge,l2population
  5. extension_drivers = port_security
  6. flat_networks = provider
  7. vni_ranges = 1:1000
  8. enable_ipset = True
  9. # add the above to /etc/neutron/plugins/ml2/ml2_conf.ini, or find each key one by one, uncomment it, and set it to the value shown
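
In the same spirit as the grep checks used for neutron.conf, the edited keys can be confirmed in a single pass:

  1. root@controller:~# grep -E '^(type_drivers|tenant_network_types|mechanism_drivers|extension_drivers|flat_networks|vni_ranges|enable_ipset)' /etc/neutron/plugins/ml2/ml2_conf.ini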

After that, configure linuxbridge_agent.ini

  1. root@controller:~# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.public_net}
  2. root@controller:~# vim linuxbridge
  3. root@controller:~# cat linuxbridge
  4. # set the following options in linuxbridge_agent.ini as shown; add any that are missing
  5. [linux_bridge]
  6. physical_interface_mappings = provider:eth0
  7. [vxlan]
  8. enable_vxlan = True
  9. local_ip = 10.0.3.10
  10. l2_population = True
  11. [securitygroup]
  12. enable_security_group = True
  13. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the layer-3 agent

  1. root@controller:~# cat l3_agent.ini
  2. [DEFAULT]
  3. interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  4. external_network_bridge =
  5. root@controller:~# vim /etc/neutron/l3_agent.ini
  6. # set the matching options in /etc/neutron/l3_agent.ini to the values shown above

Configure the DHCP agent

  1. root@controller:~# cp /etc/neutron/dhcp_agent.ini{,.back}
  2. root@controller:~# vim /etc/neutron/dhcp_agent.ini
  3. root@controller:~# cat dhcp_agent
  4. [DEFAULT]
  5. interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  6. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  7. enable_isolated_metadata = True
  8. # fill in the options in dhcp_agent.ini to match the contents above

Configure the metadata agent

  1. root@controller:~# cat metadata_agent
  2. nova_metadata_ip = controller
  3. metadata_proxy_shared_secret = METADATA_SECRET
  4. root@controller:~# grep "`cat metadata_agent`" /etc/neutron/metadata_agent.ini
  5. nova_metadata_ip = controller
  6. metadata_proxy_shared_secret = METADATA_SECRET

Configure the compute node to use Networking

  1. root@controller:~# cp /etc/nova/nova.conf{,.public_net}
  2. root@controller:~# vim nova
  3. root@controller:~# cat nova
  4. ^\[neutron\]
  5. url = http://controller:9696
  6. auth_url = http://controller:35357
  7. auth_type = password
  8. project_domain_name = default
  9. user_domain_name = default
  10. region_name = RegionOne
  11. project_name = service
  12. username = neutron
  13. password = neutron
  14. service_metadata_proxy = True
  15. metadata_proxy_shared_secret = METADATA_SECRET
  16. root@controller:~# grep "`cat nova`" /etc/nova/nova.conf
  17. auth_url = http://controller:35357
  18. auth_type = password
  19. project_domain_name = default
  20. user_domain_name = default
  21. project_name = service
  22. [neutron]
  23. url = http://controller:9696
  24. auth_url = http://controller:35357
  25. auth_type = password
  26. project_domain_name = default
  27. user_domain_name = default
  28. region_name = RegionOne
  29. project_name = service
  30. username = neutron
  31. password = neutron
  32. service_metadata_proxy = True
  33. metadata_proxy_shared_secret = METADATA_SECRET

Finish the installation by populating the database

  1. root@controller:~# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  2. No handlers could be found for logger "oslo_config.cfg"
  3. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  4. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  5. Running upgrade for neutron ...
  6. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  7. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  8. OK
  9. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  10. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  11. Running upgrade for neutron-fwaas ...
  12. INFO [alembic.runtime.migration] Context impl MySQLImpl.
  13. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  14. OK
  15. root@controller:~# echo $?

Restart the services

  1. root@controller:~# ls /etc/init.d/ | grep nova
  2. nova-api
  3. nova-compute
  4. nova-conductor
  5. nova-consoleauth
  6. nova-novncproxy
  7. nova-scheduler
  8. root@controller:~# ls /etc/init.d/ | grep nova | xargs -i service {} restart
  9. nova-api stop/waiting
  10. nova-api start/running, process
  11. nova-compute stop/waiting
  12. nova-compute start/running, process
  13. nova-conductor stop/waiting
  14. nova-conductor start/running, process
  15. nova-consoleauth stop/waiting
  16. nova-consoleauth start/running, process
  17. nova-novncproxy stop/waiting
  18. nova-novncproxy start/running, process
  19. nova-scheduler stop/waiting
  20. nova-scheduler start/running, process
  21. # restart the Networking services
  22. root@controller:~# ls /etc/init.d/ | grep neutron
  23. neutron-dhcp-agent
  24. neutron-l3-agent
  25. neutron-linuxbridge-agent
  26. neutron-linuxbridge-cleanup
  27. neutron-metadata-agent
  28. neutron-openvswitch-agent
  29. neutron-ovs-cleanup
  30. neutron-server
  31. root@controller:~# ls /etc/init.d/ | grep neutron | xargs -i service {} restart
  32. neutron-dhcp-agent stop/waiting
  33. neutron-dhcp-agent start/running, process
  34. neutron-l3-agent stop/waiting
  35. neutron-l3-agent start/running, process
  36. neutron-linuxbridge-agent stop/waiting
  37. neutron-linuxbridge-agent start/running, process
  38. stop: Unknown instance:
  39. start: Job failed to start
  40. neutron-metadata-agent stop/waiting
  41. neutron-metadata-agent start/running, process
  42. neutron-openvswitch-agent stop/waiting
  43. neutron-openvswitch-agent start/running, process
  44. neutron-ovs-cleanup stop/waiting
  45. neutron-ovs-cleanup start/running
  46. neutron-server stop/waiting
  47. neutron-server start/running, process
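
The 'stop: Unknown instance / start: Job failed to start' pair appears to come from neutron-linuxbridge-cleanup, a one-shot cleanup job rather than a long-running daemon, so the failure is harmless here. The daemons that matter can be checked individually:

  1. root@controller:~# for s in neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent; do service $s status; done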

Verify

  1. root@controller:~# source adminrc
  2. adminrc@root@controller:~$neutron ext-list
  3. +---------------------------+-----------------------------------------------+
  4. | alias | name |
  5. +---------------------------+-----------------------------------------------+
  6. | default-subnetpools | Default Subnetpools |
  7. | network-ip-availability | Network IP Availability |
  8. | network_availability_zone | Network Availability Zone |
  9. | auto-allocated-topology | Auto Allocated Topology Services |
  10. | ext-gw-mode | Neutron L3 Configurable external gateway mode |
  11. | binding | Port Binding |
  12. | agent | agent |
  13. | subnet_allocation | Subnet Allocation |
  14. | l3_agent_scheduler | L3 Agent Scheduler |
  15. | tag | Tag support |
  16. | external-net | Neutron external network |
  17. | net-mtu | Network MTU |
  18. | availability_zone | Availability Zone |
  19. | quotas | Quota management support |
  20. | l3-ha | HA Router extension |
  21. | provider | Provider Network |
  22. | multi-provider | Multi Provider Network |
  23. | address-scope | Address scope |
  24. | extraroute | Neutron Extra Route |
  25. | timestamp_core | Time Stamp Fields addition for core resources |
  26. | router | Neutron L3 Router |
  27. | extra_dhcp_opt | Neutron Extra DHCP opts |
  28. | security-group | security-group |
  29. | dhcp_agent_scheduler | DHCP Agent Scheduler |
  30. | router_availability_zone | Router Availability Zone |
  31. | rbac-policies | RBAC Policies |
  32. | standard-attr-description | standard-attr-description |
  33. | port-security | Port Security |
  34. | allowed-address-pairs | Allowed Address Pairs |
  35. | dvr | Distributed Virtual Router |
  36. +---------------------------+-----------------------------------------------+
  37. adminrc@root@controller:~$neutron agent-list
  38. +--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
  39. | id | agent_type | host | availability_zone | alive | admin_state_up | binary |
  40. +--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
  41. | 0cafd3ff-6da0--a6dd-9a60136af93a | DHCP agent | controller | nova | :-) | True | neutron-dhcp-agent |
  42. | 53fce606-311d--8af0-efd6f9087e34 | Open vSwitch agent | controller | | :-) | True | neutron-openvswitch-agent |
  43. | 7afb1ed4---b1f8-4e0c6f06fe71 | L3 agent | controller | nova | :-) | True | neutron-l3-agent |
  44. | b5dffa68-a505-448f-8fa6-7d8bb16eb07a | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
  45. | dc161e12-8b23-4f49--b7d68cfe2197 | Metadata agent | controller | | :-) | True | neutron-metadata-agent |
  46. +--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
  47. adminrc@root@controller:~$

Create the virtual networks. A provider network is needed first, and the steps are the same as in the provider-network scenario above. Since the VM was not rolled back to a snapshot, the provider network created earlier still exists, so for convenience first delete that network and the two instances.

  1. # delete the instances
  2. adminrc@root@controller:~$openstack server list
  3. +--------------------------------------+---------------+--------+--------------------+
  4. | ID | Name | Status | Networks |
  5. +--------------------------------------+---------------+--------+--------------------+
  6. | 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | provider=10.0.3.52 |
  7. | 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | provider=10.0.3.51 |
  8. +--------------------------------------+---------------+--------+--------------------+
  9.  
  10. adminrc@root@controller:~$openstack server delete 203a1f48-1f98-44ca-a3fa-883a9cea514a
  11. adminrc@root@controller:~$echo $?
  12.  
  13. adminrc@root@controller:~$openstack server delete 9eb49f96-7d68--bb37-7583e457edc6
  14. adminrc@root@controller:~$echo $?
  15.  
  16. # delete the virtual network
  17. adminrc@root@controller:~$neutron net-list
  18. +--------------------------------------+----------+--------------------------------------------------+
  19. | id | name | subnets |
  20. +--------------------------------------+----------+--------------------------------------------------+
  21. | ab73ff8f-2d19--811c-85c068290eeb | provider | 48faef6d-ee9d-4b46-a56d-3c196a766224 10.0.3.0/ |
  22. +--------------------------------------+----------+--------------------------------------------------+
  23. adminrc@root@controller:~$neutron net-delete ab73ff8f-2d19--811c-85c068290eeb
  24. Deleted network: ab73ff8f-2d19--811c-85c068290eeb
  25. adminrc@root@controller:~$neutron net-list
  26.  
  27. adminrc@root@controller:~$neutron subnet-list

Create the provider network

  1. adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
  2. Created a new network:
  3. +---------------------------+--------------------------------------+
  4. | Field | Value |
  5. +---------------------------+--------------------------------------+
  6. | admin_state_up | True |
  7. | availability_zone_hints | |
  8. | availability_zones | |
  9. | created_at | --16T00:: |
  10. | description | |
  11. | id | a600cdf0-352a-4c85-b90a-eba0ee4282fd |
  12. | ipv4_address_scope | |
  13. | ipv6_address_scope | |
  14. | mtu | |
  15. | name | provider |
  16. | port_security_enabled | True |
  17. | provider:network_type | flat |
  18. | provider:physical_network | provider |
  19. | provider:segmentation_id | |
  20. | router:external | False |
  21. | shared | True |
  22. | status | ACTIVE |
  23. | subnets | |
  24. | tags | |
  25. | tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
  26. | updated_at | --16T00:: |
  27. +---------------------------+--------------------------------------+
  28. # create the subnet
  29. adminrc@root@controller:~$neutron subnet-create --name provider --allocation-pool start=10.0.3.50,end=10.0.3.254 --dns-nameserver 114.114.114.114 --gateway 10.0.3.1 provider 10.0.3.0/24
  30. Created a new subnet:
  31. +-------------------+---------------------------------------------+
  32. | Field | Value |
  33. +-------------------+---------------------------------------------+
  34. | allocation_pools | {"start": "10.0.3.50", "end": "10.0.3.254"} |
  35. | cidr | 10.0.3.0/ |
  36. | created_at | --16T00:: |
  37. | description | |
  38. | dns_nameservers | 114.114.114.114 |
  39. | enable_dhcp | True |
  40. | gateway_ip | 10.0.3.1 |
  41. | host_routes | |
  42. | id | b19d9f26-e32e-4bb8-a53e-55eb1154cefe |
  43. | ip_version | |
  44. | ipv6_address_mode | |
  45. | ipv6_ra_mode | |
  46. | name | provider |
  47. | network_id | a600cdf0-352a-4c85-b90a-eba0ee4282fd |
  48. | subnetpool_id | |
  49. | tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
  50. | updated_at | --16T00:: |
  51. +-------------------+---------------------------------------------+

Next, create the self-service network. A small error comes up here

  1. adminrc@root@controller:~$source demorc
  2. demorc@root@controller:~$neutron net-create selfservice
  3. Unable to create the network. No tenant network is available for allocation.
  4. Neutron server returns request_ids: ['req-c2deaa15-c2eb-48b7-9510-644b3ae4f686']
  5. # troubleshooting
  6. demorc@root@controller:~$ neutron net-list
  7. +--------------------------------------+----------+--------------------------------------------------+
  8. | id | name | subnets |
  9. +--------------------------------------+----------+--------------------------------------------------+
  10. | a600cdf0-352a-4c85-b90a-eba0ee4282fd | provider | b19d9f26-e32e-4bb8-a53e-55eb1154cefe 10.0.3.0/ |
  11. +--------------------------------------+----------+--------------------------------------------------+
  12. demorc@root@controller:~$neutron subnet-list
  13. +--------------------------------------+----------+-------------+---------------------------------------------+
  14. | id | name | cidr | allocation_pools |
  15. +--------------------------------------+----------+-------------+---------------------------------------------+
  16. | b19d9f26-e32e-4bb8-a53e-55eb1154cefe | provider | 10.0.3.0/ | {"start": "10.0.3.50", "end": "10.0.3.254"} |
  17. +--------------------------------------+----------+-------------+---------------------------------------------+
  18. demorc@root@controller:~$tail /var/log/neutron/neutron-server.log
  19. -- ::14.834 ERROR neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line , in create_network_segments
  20. -- ::14.834 ERROR neutron.api.v2.resource segment = self._allocate_tenant_net_segment(session)
  21. -- ::14.834 ERROR neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line , in _allocate_tenant_net_segment
  22. -- ::14.834 ERROR neutron.api.v2.resource raise exc.NoNetworkAvailable()
  23. -- ::14.834 ERROR neutron.api.v2.resource NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.
  24. -- ::14.834 ERROR neutron.api.v2.resource
  25. -- ::14.846 INFO neutron.wsgi [req-c2deaa15-c2eb-48b7--644b3ae4f686 c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "POST /v2.0/networks.json HTTP/1.1" 0.565548
  26. -- ::32.517 INFO neutron.wsgi [req-d15a0c85----6580c476d12a c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "GET /v2.0/networks.json HTTP/1.1" 0.559720
  27. -- ::32.636 INFO neutron.wsgi [req-6d8fe235-340d-4fe5-897c-f8eee16e3b5e c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "GET /v2.0/subnets.json?fields=id&fields=cidr&id=b19d9f26-e32e-4bb8-a53e-55eb1154cefe HTTP/1.1" 0.115075
  28. -- ::19.646 INFO neutron.wsgi [req-891d5624-a86e--a81d-641e5cfc0043 c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "GET /v2.0/subnets.json HTTP/1.1" 0.436610
  29. demorc@root@controller:~$
  30. demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
  31. # make sure vni_ranges = 1:1000 sits under [ml2_type_vxlan], not under some other section
  32. [ml2_type_vxlan]
  33.  
  34. vni_ranges = 1:1000

Restart the nova and neutron services, then create it again

  1. demorc@root@controller:~$grep -rHn "vni_ranges" /etc/neutron/
  2. /etc/neutron/plugins/ml2/ml2_conf.ini::vni_ranges = :
  3. /etc/neutron/plugins/ml2/ml2_conf.ini::#vni_ranges =
  4. /etc/neutron/plugins/ml2/ml2_conf.ini.bak::#vni_ranges =
  5. /etc/neutron/plugins/ml2/ml2_conf.ini.bak::#vni_ranges =
  6. demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
  7. demorc@root@controller:~$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  8. demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
  9. demorc@root@controller:~$ls /etc/init.d/ | grep nova | xargs -i service {} restart
  10. nova-api stop/waiting
  11. nova-api start/running, process
  12. nova-compute stop/waiting
  13. nova-compute start/running, process
  14. nova-conductor stop/waiting
  15. nova-conductor start/running, process
  16. nova-consoleauth stop/waiting
  17. nova-consoleauth start/running, process
  18. nova-novncproxy stop/waiting
  19. nova-novncproxy start/running, process
  20. nova-scheduler stop/waiting
  21. nova-scheduler start/running, process
  22. demorc@root@controller:~$ls /etc/init.d/ | grep neutron | xargs -i service {} restart
  23. neutron-dhcp-agent stop/waiting
  24. neutron-dhcp-agent start/running, process
  25. neutron-l3-agent stop/waiting
  26. neutron-l3-agent start/running, process
  27. neutron-linuxbridge-agent stop/waiting
  28. neutron-linuxbridge-agent start/running, process
  29. stop: Unknown instance:
  30. start: Job failed to start
  31. neutron-metadata-agent stop/waiting
  32. neutron-metadata-agent start/running, process
  33. neutron-openvswitch-agent stop/waiting
  34. neutron-openvswitch-agent start/running, process
  35. neutron-ovs-cleanup stop/waiting
  36. neutron-ovs-cleanup start/running
  37. neutron-server stop/waiting
  38. neutron-server start/running, process
  39. demorc@root@controller:~$neutron net-list
  40. +--------------------------------------+----------+--------------------------------------------------+
  41. | id | name | subnets |
  42. +--------------------------------------+----------+--------------------------------------------------+
  43. | b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e | provider | 68f14924-15c4-4b0d-bcfc-011fd5a6de12 10.0.3.0/ |
  44. +--------------------------------------+----------+--------------------------------------------------+
  45. demorc@root@controller:~$neutron subnet-list
  46. +--------------------------------------+----------+-------------+---------------------------------------------+
  47. | id | name | cidr | allocation_pools |
  48. +--------------------------------------+----------+-------------+---------------------------------------------+
  49. | 68f14924-15c4-4b0d-bcfc-011fd5a6de12 | provider | 10.0.3.0/ | {"start": "10.0.3.50", "end": "10.0.3.254"} |
  50. +--------------------------------------+----------+-------------+---------------------------------------------+
  51. demorc@root@controller:~$neutron net-create selfservice
  52. Created a new network:
  53. +-------------------------+--------------------------------------+
  54. | Field | Value |
  55. +-------------------------+--------------------------------------+
  56. | admin_state_up | True |
  57. | availability_zone_hints | |
  58. | availability_zones | |
  59. | created_at | --16T01:: |
  60. | description | |
  61. | id | 66eb76af-e111-4cae-adc6-2df95ad29faf |
  62. | ipv4_address_scope | |
  63. | ipv6_address_scope | |
  64. | mtu | |
  65. | name | selfservice |
  66. | port_security_enabled | True |
  67. | router:external | False |
  68. | shared | False |
  69. | status | ACTIVE |
  70. | subnets | |
  71. | tags | |
  72. | tenant_id | ffc560f6a2604c3896df922115c6fc2a |
  73. | updated_at | --16T01:: |
  74. +-------------------------+--------------------------------------+

Create a subnet

  1. demorc@root@controller:~$neutron subnet-create --name selfservice --dns-nameserver 114.114.114.114 --gateway 192.168.56.1 selfservice 192.168.56.0/24
  2. Created a new subnet:
  3. +-------------------+----------------------------------------------------+
  4. | Field | Value |
  5. +-------------------+----------------------------------------------------+
  6. | allocation_pools | {"start": "192.168.56.2", "end": "192.168.56.254"} |
  7. | cidr | 192.168.56.0/ |
  8. | created_at | --16T01:: |
  9. | description | |
  10. | dns_nameservers | 114.114.114.114 |
  11. | enable_dhcp | True |
  12. | gateway_ip | 192.168.56.1 |
  13. | host_routes | |
  14. | id | 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93 |
  15. | ip_version | |
  16. | ipv6_address_mode | |
  17. | ipv6_ra_mode | |
  18. | name | selfservice |
  19. | network_id | 66eb76af-e111-4cae-adc6-2df95ad29faf |
  20. | subnetpool_id | |
  21. | tenant_id | ffc560f6a2604c3896df922115c6fc2a |
  22. | updated_at | --16T01:: |
  23. +-------------------+----------------------------------------------------+

And a second subnet

  1. demorc@root@controller:~$neutron subnet-create --name selfservice --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 selfservice 172.16.1.0/24
  2.  
  3. Created a new subnet:
  4. +-------------------+------------------------------------------------+
  5. | Field | Value |
  6. +-------------------+------------------------------------------------+
  7. | allocation_pools | {"start": "172.16.1.2", "end": "172.16.1.254"} |
  8. | cidr | 172.16.1.0/ |
  9. | created_at | --16T01:: |
  10. | description | |
  11. | dns_nameservers | 114.114.114.114 |
  12. | enable_dhcp | True |
  13. | gateway_ip | 172.16.1.1 |
  14. | host_routes | |
  15. | id | ec079b98-a585-40c0-9b4c-340c943642eb |
  16. | ip_version | |
  17. | ipv6_address_mode | |
  18. | ipv6_ra_mode | |
  19. | name | selfservice |
  20. | network_id | 66eb76af-e111-4cae-adc6-2df95ad29faf |
  21. | subnetpool_id | |
  22. | tenant_id | ffc560f6a2604c3896df922115c6fc2a |
  23. | updated_at | --16T01:: |
  24. +-------------------+------------------------------------------------+

Create a router

  1. demorc@root@controller:~$source adminrc
  2. adminrc@root@controller:~$neutron net-update provider --router:external
  3. Updated network: provider
  4. adminrc@root@controller:~$source demorc
  5. demorc@root@controller:~$neutron router-create router
  6. Created a new router:
  7. +-------------------------+--------------------------------------+
  8. | Field | Value |
  9. +-------------------------+--------------------------------------+
  10. | admin_state_up | True |
  11. | availability_zone_hints | |
  12. | availability_zones | |
  13. | description | |
  14. | external_gateway_info | |
  15. | id | 8770421b-2f3b-4d33-9acf-562b36b5b31b |
  16. | name | router |
  17. | routes | |
  18. | status | ACTIVE |
  19. | tenant_id | ffc560f6a2604c3896df922115c6fc2a |
  20. +-------------------------+--------------------------------------+
  21. demorc@root@controller:~$neutron router-list
  22. +--------------------------------------+--------+-----------------------+
  23. | id | name | external_gateway_info |
  24. +--------------------------------------+--------+-----------------------+
  25. | 8770421b-2f3b-4d33-9acf-562b36b5b31b | router | null |
  26. +--------------------------------------+--------+-----------------------+

Add a private-subnet interface on the router

  1. demorc@root@controller:~$neutron router-interface-add router selfservice
  2. Multiple subnet matches found for name 'selfservice', use an ID to be more specific.
  3. demorc@root@controller:~$neutron subnet-list
  4. +--------------------------------------+-------------+-----------------+----------------------------------------------------+
  5. | id | name | cidr | allocation_pools |
  6. +--------------------------------------+-------------+-----------------+----------------------------------------------------+
  7. | 68f14924-15c4-4b0d-bcfc-011fd5a6de12 | provider | 10.0.3.0/ | {"start": "10.0.3.50", "end": "10.0.3.254"} |
  8. | 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93 | selfservice | 192.168.56.0/ | {"start": "192.168.56.2", "end": "192.168.56.254"} |
  9. | ec079b98-a585-40c0-9b4c-340c943642eb | selfservice | 172.16.1.0/ | {"start": "172.16.1.2", "end": "172.16.1.254"} |
  10. +--------------------------------------+-------------+-----------------+----------------------------------------------------+
  11. demorc@root@controller:~$neutron router-interface-add router 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93
  12. Added interface 329ffea0-b8f2--a6b7-19556a312b75 to router router.

Set a public network gateway on the router

  1. demorc@root@controller:~$neutron router-gateway-set router provider
  2. Set gateway for router router

Verification

List the network namespaces

  1. demorc@root@controller:~$source adminrc
  2. adminrc@root@controller:~$ip netns
  3. qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b
  4. qdhcp-66eb76af-e111-4cae-adc6-2df95ad29faf
  5. qdhcp-b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e

List the ports on the router to determine the gateway IP on the public network

  1. adminrc@root@controller:~$neutron router-port-list router
  2. +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
  3. | id | name | mac_address | fixed_ips |
  4. +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
  5. | 329ffea0-b8f2--a6b7-19556a312b75 | | fa:16:3e:36:8e:3c | {"subnet_id": "9c8f506c-46bd-44d8-a8a5-e160bf2ddf93", "ip_address": "192.168.56.1"} |
  6. | a0b37442-a41b--b492-59f05637b371 | | fa:16:3e:02:33:fd | {"subnet_id": "68f14924-15c4-4b0d-bcfc-011fd5a6de12", "ip_address": "10.0.3.51"} |
  7. +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

Ping test

  1. adminrc@root@controller:~$ping -c 2 192.168.56.1
  2. PING 192.168.56.1 (192.168.56.1) 56(84) bytes of data.
  3. 64 bytes from 192.168.56.1: icmp_seq=1 ttl=64 time=0.221 ms
  4. 64 bytes from 192.168.56.1: icmp_seq=2 ttl=64 time=0.237 ms
  5.  
  6. --- 192.168.56.1 ping statistics ---
  7. 2 packets transmitted, 2 received, 0% packet loss, time 999ms
  8. rtt min/avg/max/mdev = 0.221/0.229/0.237/0.008 ms
  9. # A note on the above: two subnets were created, 192.168.56.0/24 and 172.16.1.0/24. When adding the private subnet interface to the router I used 192.168.56.0/24, so only 192.168.56.1 can be pinged here, not 172.16.1.1 (see the sketch below).
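To make the second subnet reachable as well, its interface can be attached to the same router. A minimal sketch, assuming the subnet ID ec079b98-a585-40c0-9b4c-340c943642eb from the subnet-list output above (this step is not performed in this walkthrough):

demorc@root@controller:~$neutron router-interface-add router ec079b98-a585-40c0-9b4c-340c943642eb
# afterwards 172.16.1.1 should answer, e.g. from inside the router namespace:
adminrc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b ping -c 2 172.16.1.1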

Re-create the m1.nano flavor

  1. # The existing m1.nano was created for the provider-network environment, so delete it first (changing to another flavor size might also work; I did not try)
  2. adminrc@root@controller:~$openstack flavor list
  3. +----+-----------+-------+------+-----------+-------+-----------+
  4. | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
  5. +----+-----------+-------+------+-----------+-------+-----------+
  6. | 0 | m1.nano | 64 | 1 | 0 | 1 | True |
  7. | 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
  8. | 2 | m1.small | 2048 | 20 | 0 | 1 | True |
  9. | 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
  10. | 4 | m1.large | 8192 | 80 | 0 | 4 | True |
  11. | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
  12. +----+-----------+-------+------+-----------+-------+-----------+
  13. adminrc@root@controller:~$openstack flavor delete m1.nano
  14. adminrc@root@controller:~$openstack flavor list
  15. +----+-----------+-------+------+-----------+-------+-----------+
  16. | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
  17. +----+-----------+-------+------+-----------+-------+-----------+
  18. | 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
  19. | 2 | m1.small | 2048 | 20 | 0 | 1 | True |
  20. | 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
  21. | 4 | m1.large | 8192 | 80 | 0 | 4 | True |
  22. | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
  23. +----+-----------+-------+------+-----------+-------+-----------+
  24. adminrc@root@controller:~$openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
  25. +----------------------------+---------+
  26. | Field | Value |
  27. +----------------------------+---------+
  28. | OS-FLV-DISABLED:disabled | False |
  29. | OS-FLV-EXT-DATA:ephemeral | 0 |
  30. | disk | 1 |
  31. | id | 0 |
  32. | name | m1.nano |
  33. | os-flavor-access:is_public | True |
  34. | ram | 64 |
  35. | rxtx_factor | 1.0 |
  36. | swap | |
  37. | vcpus | 1 |
  38. +----------------------------+---------+

Generate a key pair

  1. adminrc@root@controller:~$ssh-keygen
  2. Generating public/private rsa key pair.
  3. Enter file in which to save the key (/root/.ssh/id_rsa):
  4. /root/.ssh/id_rsa already exists.
  5. Overwrite (y/n)? y
  6. Enter passphrase (empty for no passphrase):
  7. Enter same passphrase again:
  8. Your identification has been saved in /root/.ssh/id_rsa.
  9. Your public key has been saved in /root/.ssh/id_rsa.pub.
  10. The key fingerprint is:
  11. :be::f6:be:9b::9b:db::e1:ee:1a:fb::b1 root@controller
  12. The key's randomart image is:
  13. +--[ RSA ]----+
  14. | |
  15. | . |
  16. | o . |
  17. | o . .|
  18. | S + ..|
  19. | + o ... |
  20. | . . ..+. |
  21. | .oE+. |
  22. | oOB*o |
  23. +-----------------+
  24. adminrc@root@controller:~$source demorc
  25. demorc@root@controller:~$openstack keypair create --public-key /root/.ssh/id_rsa.pub mykey
  26. +-------------+-------------------------------------------------+
  27. | Field | Value |
  28. +-------------+-------------------------------------------------+
  29. | fingerprint | :be::f6:be:9b::9b:db::e1:ee:1a:fb::b1 |
  30. | name | mykey |
  31. | user_id | c4de9fac882740838aa26e9119b30cb9 |
  32. +-------------+-------------------------------------------------+
  33. demorc@root@controller:~$openstack keypair list
  34. +-------+-------------------------------------------------+
  35. | Name | Fingerprint |
  36. +-------+-------------------------------------------------+
  37. | mykey | :be::f6:be:9b::9b:db::e1:ee:1a:fb::b1 |
  38. +-------+-------------------------------------------------+

Add security group rules

  1. # Allow ICMP (ping)
  2. demorc@root@controller:~$openstack security group rule create --proto icmp default
  3. +-----------------------+--------------------------------------+
  4. | Field | Value |
  5. +-----------------------+--------------------------------------+
  6. | id | b76e25be-c17e-48b3-8bbd-8505c3637900 |
  7. | ip_protocol | icmp |
  8. | ip_range | 0.0.0.0/0 |
  9. | parent_group_id | 82cd1a2f-5eaa--a6d4-480daf27cf3d |
  10. | port_range | |
  11. | remote_security_group | |
  12. +-----------------------+--------------------------------------+
  13. # Allow SSH access
  14. demorc@root@controller:~$openstack security group rule create --proto tcp --dst-port 22 default
  15. +-----------------------+--------------------------------------+
  16. | Field | Value |
  17. +-----------------------+--------------------------------------+
  18. | id | 32096d51-9e2a-45f2-a65a-27ef3c1bb2b5 |
  19. | ip_protocol | tcp |
  20. | ip_range | 0.0.0.0/0 |
  21. | parent_group_id | 82cd1a2f-5eaa--a6d4-480daf27cf3d |
  22. | port_range | 22:22 |
  23. | remote_security_group | |
  24. +-----------------------+--------------------------------------+
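To double-check that both rules landed in the default security group, they can be listed with the neutron client; a quick sanity-check sketch (output abbreviated):

demorc@root@controller:~$neutron security-group-rule-list
# expect one ingress icmp rule and one ingress tcp rule for port 22 in the default group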

Start creating an instance

  1. demorc@root@controller:~$openstack flavor list
  2. +----+-----------+-------+------+-----------+-------+-----------+
  3. | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
  4. +----+-----------+-------+------+-----------+-------+-----------+
  5. | 0 | m1.nano | 64 | 1 | 0 | 1 | True |
  6. | 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
  7. | 2 | m1.small | 2048 | 20 | 0 | 1 | True |
  8. | 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
  9. | 4 | m1.large | 8192 | 80 | 0 | 4 | True |
  10. | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
  11. +----+-----------+-------+------+-----------+-------+-----------+
  12. demorc@root@controller:~$openstack image list
  13. +--------------------------------------+---------+--------+
  14. | ID | Name | Status |
  15. +--------------------------------------+---------+--------+
  16. | 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
  17. +--------------------------------------+---------+--------+
  18. demorc@root@controller:~$openstack network list
  19. +--------------------------------------+-------------+----------------------------------------------------------------------------+
  20. | ID | Name | Subnets |
  21. +--------------------------------------+-------------+----------------------------------------------------------------------------+
  22. | 66eb76af-e111-4cae-adc6-2df95ad29faf | selfservice | 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93, ec079b98-a585-40c0-9b4c-340c943642eb |
  23. | b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e | provider | 68f14924-15c4-4b0d-bcfc-011fd5a6de12 |
  24. +--------------------------------------+-------------+----------------------------------------------------------------------------+
  25. demorc@root@controller:~$openstack security group list
  26. +--------------------------------------+---------+------------------------+----------------------------------+
  27. | ID | Name | Description | Project |
  28. +--------------------------------------+---------+------------------------+----------------------------------+
  29. | 82cd1a2f-5eaa--a6d4-480daf27cf3d | default | Default security group | ffc560f6a2604c3896df922115c6fc2a |
  30. +--------------------------------------+---------+------------------------+----------------------------------+
  31. # Make sure all of the items above are available
  32. # The flavor used is m1.nano
  33. # net-id is the ID of the selfservice network
  34. demorc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=66eb76af-e111-4cae-adc6-2df95ad29faf --security-group default --key-name mykey selfservice-instance
  35. +--------------------------------------+------------------------------------------------+
  36. | Field | Value |
  37. +--------------------------------------+------------------------------------------------+
  38. | OS-DCF:diskConfig | MANUAL |
  39. | OS-EXT-AZ:availability_zone | |
  40. | OS-EXT-STS:power_state | |
  41. | OS-EXT-STS:task_state | scheduling |
  42. | OS-EXT-STS:vm_state | building |
  43. | OS-SRV-USG:launched_at | None |
  44. | OS-SRV-USG:terminated_at | None |
  45. | accessIPv4 | |
  46. | accessIPv6 | |
  47. | addresses | |
  48. | adminPass | uFD7TkvHjsax |
  49. | config_drive | |
  50. | created | --16T02::45Z |
  51. | flavor | m1.nano () |
  52. | hostId | |
  53. | id | 4c954e71-8e73-49e1-a67f-20c007d582d3 |
  54. | image | cirrors (39d73bcf-e60b-4caf--cca17de00d7e) |
  55. | key_name | mykey |
  56. | name | selfservice-instance |
  57. | os-extended-volumes:volumes_attached | [] |
  58. | progress | |
  59. | project_id | ffc560f6a2604c3896df922115c6fc2a |
  60. | properties | |
  61. | security_groups | [{u'name': u'default'}] |
  62. | status | BUILD |
  63. | updated | --16T02::46Z |
  64. | user_id | c4de9fac882740838aa26e9119b30cb9 |
  65. +--------------------------------------+------------------------------------------------+

Check the instance status

  1. demorc@root@controller:~$openstack server list
  2. +--------------------------------------+----------------------+--------+--------------------------+
  3. | ID | Name | Status | Networks |
  4. +--------------------------------------+----------------------+--------+--------------------------+
  5. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3 |
  6. +--------------------------------------+----------------------+--------+--------------------------+

Check with nova list

  1. demorc@root@controller:~$nova list
  2. +--------------------------------------+----------------------+--------+------------+-------------+--------------------------+
  3. | ID | Name | Status | Task State | Power State | Networks |
  4. +--------------------------------------+----------------------+--------+------------+-------------+--------------------------+
  5. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3 |
  6. +--------------------------------------+----------------------+--------+------------+-------------+--------------------------+

Stop, start, and delete the instance

  1. demorc@root@controller:~$openstack server stop 4c954e71-8e73-49e1-a67f-20c007d582d3
  2. demorc@root@controller:~$openstack server list
  3. +--------------------------------------+----------------------+---------+--------------------------+
  4. | ID | Name | Status | Networks |
  5. +--------------------------------------+----------------------+---------+--------------------------+
  6. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | SHUTOFF | selfservice=192.168.56.3 |
  7. +--------------------------------------+----------------------+---------+--------------------------+
  8. demorc@root@controller:~$openstack server start 4c954e71-8e73-49e1-a67f-20c007d582d3
  9. demorc@root@controller:~$openstack server list
  10. +--------------------------------------+----------------------+--------+--------------------------+
  11. | ID | Name | Status | Networks |
  12. +--------------------------------------+----------------------+--------+--------------------------+
  13. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3 |
  14. +--------------------------------------+----------------------+--------+--------------------------+
  15. demorc@root@controller:~$openstack server stop 4c954e71-8e73-49e1-a67f-20c007d582d3
  16. demorc@root@controller:~$openstack server delete 4c954e71-8e73-49e1-a67f-20c007d582d3

Access the instance through the virtual console

  1. demorc@root@controller:~$openstack console url show selfservice-instance
  2. +-------+------------------------------------------------------------------------------------+
  3. | Field | Value |
  4. +-------+------------------------------------------------------------------------------------+
  5. | type | novnc |
  6. | url | http://192.168.56.10:6080/vnc_auto.html?token=82177d68-c9fb-4c3c-85d6-6d42db50c864 |
  7. +-------+------------------------------------------------------------------------------------+

Just paste the URL above into a browser.

Since this is a single-node install, pinging the instance from the host requires doing it from inside the router's network namespace:

  1. demorc@root@controller:~$ip netns
  2. qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b # copy this line
  3. qdhcp-66eb76af-e111-4cae-adc6-2df95ad29faf
  4. qdhcp-b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e
  5. demorc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b ip a | grep "inet"
  6. inet 127.0.0.1/8 scope host lo
  7. inet6 ::1/128 scope host
  8. inet 192.168.56.1/24 brd 192.168.56.255 scope global qr-329ffea0-b8
  9. inet6 fe80::f816:3eff:fe36:8e3c/64 scope link
  10. inet 10.0.3.51/24 brd 10.0.3.255 scope global qg-a0b37442-a4
  11. inet6 fe80::f816:3eff:fe02:33fd/64 scope link
  12. demorc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b ping 192.168.56.3
  13. PING 192.168.56.3 (192.168.56.3) 56(84) bytes of data.
  14. 64 bytes from 192.168.56.3: icmp_seq=1 ttl=64 time=8.95 ms
  15. 64 bytes from 192.168.56.3: icmp_seq=2 ttl=64 time=0.610 ms
  16. 64 bytes from 192.168.56.3: icmp_seq=3 ttl=64 time=0.331 ms
  17. 64 bytes from 192.168.56.3: icmp_seq=4 ttl=64 time=0.344 ms
  18. ^C
  19. --- 192.168.56.3 ping statistics ---
  20. 4 packets transmitted, 4 received, 0% packet loss, time 3000ms
  21. rtt min/avg/max/mdev = 0.331/2.560/8.955/3.693 ms

Create a floating IP for remote access

  1. demorc@root@controller:~$source adminrc
  2. adminrc@root@controller:~$openstack ip floating create provider
  3. +-------------+--------------------------------------+
  4. | Field | Value |
  5. +-------------+--------------------------------------+
  6. | fixed_ip | None |
  7. | id | 00315ef2--42ae-825b-0f94ed098de8 |
  8. | instance_id | None |
  9. | ip | 10.0.3.52 |
  10. | pool | provider |
  11. +-------------+--------------------------------------+

Assign a floating IP to the instance

View the floating IPs

  1. adminrc@root@controller:~$openstack ip floating list
  2. +--------------------------------------+---------------------+------------------+------+
  3. | ID | Floating IP Address | Fixed IP Address | Port |
  4. +--------------------------------------+---------------------+------------------+------+
  5. | 00315ef2--42ae-825b-0f94ed098de8 | 10.0.3.52 | None | None |
  6. +--------------------------------------+---------------------+------------------+------+
  7. # Add the floating IP to the instance
  8. adminrc@root@controller:~$openstack ip floating add 10.0.3.52 4c954e71-8e73-49e1-a67f-20c007d582d3
  9. Unable to associate floating IP 10.0.3.52 to fixed IP 192.168.56.3 for instance 4c954e71-8e73-49e1-a67f-20c007d582d3. Error: Bad floatingip request: Port 454451d2-6c5d-411c-8ad0-d6f5908259a6 is associated with a different tenant than Floating IP 00315ef2--42ae-825b-0f94ed098de8 and therefore cannot be bound..
  10. Neutron server returns request_ids: ['req-58f751d8-ab56-41d3-bb99-de2307ed9c67'] (HTTP ) (Request-ID: req-330493bd-f040-4b24-a08b-8384b162ea60)
  11. # The error is because a floating IP created under adminrc cannot be bound to an instance owned by the demorc user
  12. # Fix: delete the floating IP and re-create it as the demorc user
  13. adminrc@root@controller:~$ openstack ip floating list
  14. +--------------------------------------+---------------------+------------------+------+
  15. | ID | Floating IP Address | Fixed IP Address | Port |
  16. +--------------------------------------+---------------------+------------------+------+
  17. | 00315ef2--42ae-825b-0f94ed098de8 | 10.0.3.52 | None | None |
  18. +--------------------------------------+---------------------+------------------+------+
  19. adminrc@root@controller:~$openstack ip floating delete 00315ef2--42ae-825b-0f94ed098de8
  20. adminrc@root@controller:~$openstack ip floating list
  21.  
  22. adminrc@root@controller:~$source demorc
  23. demorc@root@controller:~$openstack ip floating create provider
  24. +-------------+--------------------------------------+
  25. | Field | Value |
  26. +-------------+--------------------------------------+
  27. | fixed_ip | None |
  28. | id | 72d37905-4e1d-45a4-a010-a041968a0220 |
  29. | instance_id | None |
  30. | ip | 10.0.3.53 |
  31. | pool | provider |
  32. +-------------+--------------------------------------+
  33. demorc@root@controller:~$openstack ip floating add 10.0.3.53 selfservice-instance
  34. demorc@root@controller:~$openstack server list
  35. +--------------------------------------+----------------------+--------+-------------------------------------+
  36. | ID | Name | Status | Networks |
  37. +--------------------------------------+----------------------+--------+-------------------------------------+
  38. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3, 10.0.3.53 |
  39. +--------------------------------------+----------------------+--------+-------------------------------------+

Test the floating IP

  1. demorc@root@controller:~$ping -c 2 10.0.3.53
  2. PING 10.0.3.53 (10.0.3.53) 56(84) bytes of data.
  3. 64 bytes from 10.0.3.53: icmp_seq=1 ttl=63 time=3.40 ms
  4. 64 bytes from 10.0.3.53: icmp_seq=2 ttl=63 time=0.415 ms
  5.  
  6. --- 10.0.3.53 ping statistics ---
  7. 2 packets transmitted, 2 received, 0% packet loss, time 1001ms
  8. rtt min/avg/max/mdev = 0.415/1.912/3.409/1.497 ms
  9. demorc@root@controller:~$su -
  10. root@controller:~# ssh cirros@10.0.3.53
  11. The authenticity of host '10.0.3.53 (10.0.3.53)' can't be established.
  12. RSA key fingerprint is e2::a9:e6:::a9:db::cb::5c::9a:4e:c7.
  13. Are you sure you want to continue connecting (yes/no)? yes
  14. Warning: Permanently added '10.0.3.53' (RSA) to the list of known hosts.
  15. $ ifconfig
  16. eth0 Link encap:Ethernet HWaddr FA:16:3E:30:6D:63
  17. inet addr:192.168.56.3 Bcast:192.168.56.255 Mask:255.255.255.0
  18. inet6 addr: fe80::f816:3eff:fe30:6d63/64 Scope:Link
  19. UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  20. RX packets: errors: dropped: overruns: frame:
  21. TX packets: errors: dropped: overruns: carrier:
  22. collisions: txqueuelen:
  23. RX bytes: (15.0 KiB) TX bytes: (14.6 KiB)
  24.  
  25. lo Link encap:Local Loopback
  26. inet addr:127.0.0.1 Mask:255.0.0.0
  27. inet6 addr: ::1/128 Scope:Host
  28. UP LOOPBACK RUNNING MTU: Metric:
  29. RX packets: errors: dropped: overruns: frame:
  30. TX packets: errors: dropped: overruns: carrier:
  31. collisions: txqueuelen:
  32. RX bytes: (0.0 B) TX bytes: (0.0 B)
  33.  
  34. $ ping -c 2 www.qq.com
  35. PING www.qq.com (61.129.7.47): 56 data bytes
  36. bytes from 61.129.7.47: seq= ttl= time=7.461 ms
  37. bytes from 61.129.7.47: seq= ttl= time=6.463 ms
  38.  
  39. --- www.qq.com ping statistics ---
  40. 2 packets transmitted, 2 packets received, 0% packet loss
  41. round-trip min/avg/max = 6.463/6.962/7.461 ms
  42. $ exit
  43. Connection to 10.0.3.53 closed.

What floating IPs are for: when an instance sits on a private (self-service) network but still needs to reach and be reached from the outside, binding a floating IP is what connects the instance on the private network to the public network.
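Under the hood a floating IP is implemented as a pair of NAT rules inside the router's network namespace. A way to see this, sketched with the namespace and addresses used in this environment:

adminrc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b iptables -t nat -S | grep 10.0.3.53
# expect a DNAT rule translating 10.0.3.53 -> 192.168.56.3 and an SNAT rule for the reverse direction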

  1. demorc@root@controller:~$nova list
  2. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  3. | ID | Name | Status | Task State | Power State | Networks |
  4. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  5. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3, 10.0.3.53 |
  6. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  7. demorc@root@controller:~$ping -c 2 10.0.3.53
  8. PING 10.0.3.53 (10.0.3.53) 56(84) bytes of data.
  9. 64 bytes from 10.0.3.53: icmp_seq=1 ttl=63 time=3.31 ms
  10. 64 bytes from 10.0.3.53: icmp_seq=2 ttl=63 time=0.550 ms
  11.  
  12. --- 10.0.3.53 ping statistics ---
  13. 2 packets transmitted, 2 received, 0% packet loss, time 1001ms
  14. rtt min/avg/max/mdev = 0.550/1.934/3.319/1.385 ms
  15. demorc@root@controller:~$ssh -i /root/.ssh/id_rsa cirros@10.0.3.53
  16. $ ifconfig
  17. eth0 Link encap:Ethernet HWaddr FA:16:3E:30:6D:63
  18. inet addr:192.168.56.3 Bcast:192.168.56.255 Mask:255.255.255.0
  19. inet6 addr: fe80::f816:3eff:fe30:6d63/64 Scope:Link
  20. UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  21. RX packets: errors: dropped: overruns: frame:
  22. TX packets: errors: dropped: overruns: carrier:
  23. collisions: txqueuelen:
  24. RX bytes: (29.8 KiB) TX bytes: (26.4 KiB)
  25.  
  26. lo Link encap:Local Loopback
  27. inet addr:127.0.0.1 Mask:255.0.0.0
  28. inet6 addr: ::1/128 Scope:Host
  29. UP LOOPBACK RUNNING MTU: Metric:
  30. RX packets: errors: dropped: overruns: frame:
  31. TX packets: errors: dropped: overruns: carrier:
  32. collisions: txqueuelen:
  33. RX bytes: (0.0 B) TX bytes: (0.0 B)
  34.  
  35. $ exit
  36. Connection to 10.0.3.53 closed.

Install the OpenStack dashboard

  1. root@controller:~# apt-get install -y openstack-dashboard

Configure the dashboard

  1. root@controller:~# cp /etc/openstack-dashboard/local_settings.py{,.bak}
  2. root@controller:~# vim /etc/openstack-dashboard/local_settings.py
  3. OPENSTACK_HOST = "controller"
  4. OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
  5. OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
  6.  
  7. OPENSTACK_HOST = "controller"
  8. ALLOWED_HOSTS = '*'
  9.  
  10. SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
  11.  
  12. CACHES = {
  13. 'default': {
  14. 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
  15. 'LOCATION': '10.0.3.10:11211',
  16. }
  17. }
  18.  
  19. OPENSTACK_API_VERSIONS = {
  20. "data-processing": 1.1,
  21. "identity": ,
  22. "volume": ,
  23. "compute": ,
  24. }
  25.  
  26. OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  27. OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
  28.  
  29. OPENSTACK_NEUTRON_NETWORK = {
  30. 'enable_router': True,
  31. 'enable_quotas': True,
  32. 'enable_ipv6': True,
  33. 'enable_distributed_router': False,
  34. 'enable_ha_router': False,
  35. 'enable_lb': True,
  36. 'enable_firewall': True,
  37. 'enable_vpn': True,
  38. 'enable_fip_topology_check': True,
  39.  
  40. 'default_ipv4_subnet_pool_label': None,
  41.  
  42. 'default_ipv6_subnet_pool_label': None,
  43. 'profile_support': None,
  44. 'supported_provider_types': ['*'],
  45. 'supported_vnic_types': ['*'],
  46. }
  47.  
  48. TIME_ZONE = "Asia/Shanghai"

Reload apache2

  1. root@controller:~# service apache2 reload
  2. * Reloading web server apache2 *
  3. root@controller:~# echo $?
  4. 0

Test in a browser
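On Ubuntu the openstack-dashboard package serves Horizon under the /horizon path, so the dashboard should be reachable at a URL like the one below (using this host's management address; adjust if your addressing differs). Log in with the default domain and the admin or demo credentials; the adminrc file that follows is a reminder of what the admin password was set to.

http://10.0.3.10/horizon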

  1. # If you forget the admin password, check this file
  2. openstack@controller:~$ cat adminrc
  3. unset OS_TOKEN
  4. unset OS_URL
  5. unset OS_IDENTITY_API_VERSION
  6.  
  7. export OS_PROJECT_DOMAIN_NAME=default
  8. export OS_USER_DOMAIN_NAME=default
  9. export OS_PROJECT_NAME=admin
  10. export OS_USERNAME=admin
  11. export OS_PASSWORD=admin
  12. export OS_AUTH_URL=http://controller:35357/v3
  13. export OS_IDENTITY_API_VERSION=3
  14. export OS_IMAGE_API_VERSION=2
  15. export PS1="adminrc@\u@\h:\w\$"

Verify the demo user

View the network topology as the demo user

View the related information

View the routers' details

View the related information as admin

Install Cinder

First add a new disk to the virtual machine; the steps are not shown here, just accept the defaults and click through.
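Before touching LVM it is worth confirming the new disk is actually visible to the system. A small sketch, assuming the disk appears as /dev/sdb as it does later in this walkthrough:

root@controller:~# ls /dev/sd*        # the new disk should show up as /dev/sdb
root@controller:~# fdisk -l /dev/sdb  # and report an empty disk with no partition table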

Prepare the Cinder installation environment

  1. root@controller:~# mysql -uroot -p123456
  2. Welcome to the MariaDB monitor. Commands end with ; or \g.
  3. Your MariaDB connection id is
  4. Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)
  5.  
  6. Copyright (c) , , Oracle, MariaDB Corporation Ab and others.
  7.  
  8. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  9.  
  10. MariaDB [(none)]> create database cinder;
  11. Query OK, 1 row affected (0.00 sec)
  12.  
  13. MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
  14. Query OK, 0 rows affected (0.00 sec)
  15.  
  16. MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';
  17. Query OK, 0 rows affected (0.00 sec)
  18.  
  19. MariaDB [(none)]> \q
  20. Bye

Switch to the adminrc environment

  1. # Create a cinder user
  2. root@controller:~# source adminrc
  3. adminrc@root@controller:~$openstack user create --domain default --password cinder cinder
  4. +-----------+----------------------------------+
  5. | Field | Value |
  6. +-----------+----------------------------------+
  7. | domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
  8. | enabled | True |
  9. | id | 74153e9abf694f2f9ecd2203b71e2529 |
  10. | name | cinder |
  11. +-----------+----------------------------------+
  12. # Add the admin role to the cinder user
  13. adminrc@root@controller:~$openstack role add --project service --user cinder admin
  14. # Create the cinder and cinderv2 service entities
  15. adminrc@root@controller:~$openstack service create --name cinder --description "OpenStack Block Storage" volume
  16. +-------------+----------------------------------+
  17. | Field | Value |
  18. +-------------+----------------------------------+
  19. | description | OpenStack Block Storage |
  20. | enabled | True |
  21. | id | 3f13455162a145e28096ce110be1213e |
  22. | name | cinder |
  23. | type | volume |
  24. +-------------+----------------------------------+
  25. adminrc@root@controller:~$openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
  26. +-------------+----------------------------------+
  27. | Field | Value |
  28. +-------------+----------------------------------+
  29. | description | OpenStack Block Storage |
  30. | enabled | True |
  31. | id | 9fefead9767048e1b632bb7026c55380 |
  32. | name | cinderv2 |
  33. | type | volumev2 |
  34. +-------------+----------------------------------+

Create the Block Storage service API endpoints

  1. adminrc@root@controller:~$openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
  2. +--------------+-----------------------------------------+
  3. | Field | Value |
  4. +--------------+-----------------------------------------+
  5. | enabled | True |
  6. | id | d45e4cd8fb7945968d5e644a74dc62e3 |
  7. | interface | public |
  8. | region | RegionOne |
  9. | region_id | RegionOne |
  10. | service_id | 3f13455162a145e28096ce110be1213e |
  11. | service_name | cinder |
  12. | service_type | volume |
  13. | url | http://controller:8776/v1/%(tenant_id)s |
  14. +--------------+-----------------------------------------+
  15. adminrc@root@controller:~$openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
  16. +--------------+-----------------------------------------+
  17. | Field | Value |
  18. +--------------+-----------------------------------------+
  19. | enabled | True |
  20. | id | fcf99a2a72c94d81b472f4c75ea952c8 |
  21. | interface | internal |
  22. | region | RegionOne |
  23. | region_id | RegionOne |
  24. | service_id | 3f13455162a145e28096ce110be1213e |
  25. | service_name | cinder |
  26. | service_type | volume |
  27. | url | http://controller:8776/v1/%(tenant_id)s |
  28. +--------------+-----------------------------------------+
  29. adminrc@root@controller:~$openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
  30. +--------------+-----------------------------------------+
  31. | Field | Value |
  32. +--------------+-----------------------------------------+
  33. | enabled | True |
  34. | id | e611a9caabf640dfbcd93b7b750180da |
  35. | interface | admin |
  36. | region | RegionOne |
  37. | region_id | RegionOne |
  38. | service_id | 3f13455162a145e28096ce110be1213e |
  39. | service_name | cinder |
  40. | service_type | volume |
  41. | url | http://controller:8776/v1/%(tenant_id)s |
  42. +--------------+-----------------------------------------+
  43. adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
  44. +--------------+-----------------------------------------+
  45. | Field | Value |
  46. +--------------+-----------------------------------------+
  47. | enabled | True |
  48. | id | ecd1248c63844473aba74c6af3554a00 |
  49. | interface | admin |
  50. | region | RegionOne |
  51. | region_id | RegionOne |
  52. | service_id | 9fefead9767048e1b632bb7026c55380 |
  53. | service_name | cinderv2 |
  54. | service_type | volumev2 |
  55. | url | http://controller:8776/v2/%(tenant_id)s |
  56. +--------------+-----------------------------------------+
  57. adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
  58. +--------------+-----------------------------------------+
  59. | Field | Value |
  60. +--------------+-----------------------------------------+
  61. | enabled | True |
  62. | id | 862a463ef202433e95e2e1c80030af59 |
  63. | interface | public |
  64. | region | RegionOne |
  65. | region_id | RegionOne |
  66. | service_id | 9fefead9767048e1b632bb7026c55380 |
  67. | service_name | cinderv2 |
  68. | service_type | volumev2 |
  69. | url | http://controller:8776/v2/%(tenant_id)s |
  70. +--------------+-----------------------------------------+
  71. adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
  72. +--------------+-----------------------------------------+
  73. | Field | Value |
  74. +--------------+-----------------------------------------+
  75. | enabled | True |
  76. | id | 89fcc47679e94213a0ec2d8eabed95db |
  77. | interface | internal |
  78. | region | RegionOne |
  79. | region_id | RegionOne |
  80. | service_id | 9fefead9767048e1b632bb7026c55380 |
  81. | service_name | cinderv2 |
  82. | service_type | volumev2 |
  83. | url | http://controller:8776/v2/%(tenant_id)s |
  84. +--------------+-----------------------------------------+
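With all six endpoints in place, a quick check that none is missing (a sketch; grep just filters the full listing):

adminrc@root@controller:~$openstack endpoint list | grep cinder
# expect six rows: public, internal and admin for both the volume and volumev2 services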

Install and configure the components

  1. adminrc@root@controller:~$apt-get install -y cinder-api cinder-scheduler

Start configuring Cinder

  1. adminrc@root@controller:~$cp /etc/cinder/cinder.conf{,.bak}
  2. adminrc@root@controller:~$vim /etc/cinder/cinder.conf
  3. [DEFAULT]
  4. rootwrap_config = /etc/cinder/rootwrap.conf
  5. api_paste_confg = /etc/cinder/api-paste.ini
  6. iscsi_helper = tgtadm
  7. volume_name_template = volume-%s
  8. volume_group = cinder-volumes
  9. verbose = True
  10. auth_strategy = keystone
  11. state_path = /var/lib/cinder
  12. lock_path = /var/lock/cinder
  13. volumes_dir = /var/lib/cinder/volumes
  14. auth_strategy = keystone
  15. rpc_backend = rabbit
  16. my_ip = 10.0.3.10
  17.  
  18. [database]
  19.  
  20. connection = mysql+pymysql://cinder:123456@controller/cinder
  21.  
  22. [keystone_authtoken]
  23.  
  24. auth_uri = http://controller:5000
  25. auth_url = http://controller:35357
  26. memcached_servers = controller:11211
  27. auth_type = password
  28. project_domain_name = default
  29. user_domain_name = default
  30. project_name = service
  31. username = cinder
  32. password = cinder
  33.  
  34. [oslo_messaging_rabbit]
  35.  
  36. rabbit_host = controller
  37. rabbit_userid = openstack
  38. rabbit_password =
  39. [oslo_concurrency]
  40. lock_path = /var/lib/cinder/tmp

After confirming the configuration, sync the database

  1. adminrc@root@controller:~$su -s /bin/bash -c "cinder-manage db sync" cinder
  2. Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
  3. -- ::23.140 WARNING py.warnings [-] /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  4. exception.NotSupportedWarning
  5.  
  6. -- ::23.203 INFO migrate.versioning.api [-] -> ...
  7. .........
  8. -- ::25.097 INFO migrate.versioning.api [-] done
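To confirm the sync actually created the schema, the tables can be listed with the credentials granted earlier (a sketch; the exact table set varies by release):

root@controller:~# mysql -ucinder -p123456 cinder -e 'show tables;' | head
# tables such as volumes, snapshots and services should appear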

Configure the compute node to use Block Storage

  1. adminrc@root@controller:~$cp /etc/nova/nova.conf{,.private}
  2. adminrc@root@controller:~$vim /etc/nova/nova.conf
  3. # Append at the end of the file
  4. [cinder]
  5. os_region_name = RegionOne
  6. # After saving, restart the nova-api and cinder services
  7. adminrc@root@controller:~$service nova-api restart
  8. nova-api stop/waiting
  9. nova-api start/running, process
  10. adminrc@root@controller:~$service cinder-
  11. cinder-api cinder-scheduler
  12. adminrc@root@controller:~$ls /etc/init.d/ | grep cinder
  13. cinder-api
  14. cinder-scheduler
  15. adminrc@root@controller:~$ls /etc/init.d/ | grep cinder | xargs -i service {} restart
  16. cinder-api stop/waiting
  17. cinder-api start/running, process
  18. cinder-scheduler stop/waiting
  19. cinder-scheduler start/running, process

Install lvm2

  1. adminrc@root@controller:~$apt-get install -y lvm2

Create the LVM physical volume and volume group

  1. adminrc@root@controller:~$pvcreate /dev/sdb
  2. Physical volume "/dev/sdb" successfully created
  3. adminrc@root@controller:~$vgcreate cinder-volumes /dev/sdb
  4. Volume group "cinder-volumes" successfully created
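A quick look at the result before moving on (sketch):

adminrc@root@controller:~$pvs
adminrc@root@controller:~$vgs
# /dev/sdb should be listed as the only PV in the cinder-volumes VG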

Configure lvm.conf

  1. adminrc@root@controller:~$cp /etc/lvm/lvm.conf{,.bak}
  2. adminrc@root@controller:~$vim /etc/lvm/lvm.conf
  3.  
  4. filter = [ "a/sdb/", "r/.*/"] # change the original value to this
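One caveat, taken from the upstream install guide rather than from this environment: if the operating system disk itself sits on LVM, the filter must also accept that device (typically /dev/sda), otherwise the system's own volumes become invisible to LVM, e.g.:

filter = [ "a/sda/", "a/sdb/", "r/.*/" ]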

Install and configure the components

  1. adminrc@root@controller:~$apt-get install cinder-volume

Configure cinder.conf

  1. adminrc@root@controller:~$cat /etc/cinder/cinder.conf
  2. [DEFAULT]
  3. rootwrap_config = /etc/cinder/rootwrap.conf
  4. api_paste_confg = /etc/cinder/api-paste.ini
  5. iscsi_helper = tgtadm
  6. volume_name_template = volume-%s
  7. volume_group = cinder-volumes
  8. verbose = True
  9. auth_strategy = keystone
  10. state_path = /var/lib/cinder
  11. lock_path = /var/lock/cinder
  12. volumes_dir = /var/lib/cinder/volumes
  13. auth_strategy = keystone
  14. rpc_backend = rabbit
  15. my_ip = 10.0.3.10
  16. enabled_backends = lvm
  17. glance_api_servers = http://controller:9292
  18.  
  19. [database]
  20.  
  21. connection = mysql+pymysql://cinder:123456@controller/cinder
  22.  
  23. [keystone_authtoken]
  24.  
  25. auth_uri = http://controller:5000
  26. auth_url = http://controller:35357
  27. memcached_servers = controller:11211
  28. auth_type = password
  29. project_domain_name = default
  30. user_domain_name = default
  31. project_name = service
  32. username = cinder
  33. password = cinder
  34.  
  35. [oslo_messaging_rabbit]
  36.  
  37. rabbit_host = controller
  38. rabbit_userid = openstack
  39. rabbit_password =
  40. [oslo_concurrency]
  41. lock_path = /var/lib/cinder/tmp
  42.  
  43. [lvm]
  44. volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  45. volume_group = cinder-volumes
  46. iscsi_protocol = iscsi
  47. iscsi_helper = tgtadm
  48.  
  49. [oslo_concurrency]
  50. lock_path = /var/lib/cinder/tmp

Restart the services

  1. adminrc@root@controller:~$service tgt restart
  2. tgt stop/waiting
  3. tgt start/running, process
  4. adminrc@root@controller:~$service cinder-volume restart
  5. cinder-volume stop/waiting
  6. cinder-volume start/running, process

Verification

  1. adminrc@root@controller:~$cinder service-list
  2. +------------------+----------------+------+---------+-------+----------------------------+-----------------+
  3. | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
  4. +------------------+----------------+------+---------+-------+----------------------------+-----------------+
  5. | cinder-scheduler | controller | nova | enabled | up | --17T03::00.000000 | - |
  6. | cinder-volume | controller | nova | enabled | down | --17T03::52.000000 | - |
  7. | cinder-volume | controller@lvm | nova | enabled | up | --17T03::01.000000 | - |
  8. +------------------+----------------+------+---------+-------+----------------------------+-----------------+

# Not sure why one of the cinder-volume entries is down
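A plausible explanation (my assumption, not verified here): the plain `controller` row is a stale registration left over from running cinder-volume before `enabled_backends = lvm` was set; once the backend name was added, the service re-registered as `controller@lvm` and the old row simply stopped sending heartbeats. If so, the stale entry is harmless and can be removed:

adminrc@root@controller:~$cinder-manage service remove cinder-volume controller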

Switch to the demo user

  1. adminrc@root@controller:~$source demorc
  2. demorc@root@controller:~$openstack volume create --size 1 volume1
  3. +---------------------+--------------------------------------+
  4. | Field | Value |
  5. +---------------------+--------------------------------------+
  6. | attachments | [] |
  7. | availability_zone | nova |
  8. | bootable | false |
  9. | consistencygroup_id | None |
  10. | created_at | --17T04::56.366573 |
  11. | description | None |
  12. | encrypted | False |
  13. | id | 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 |
  14. | multiattach | False |
  15. | name | volume1 |
  16. | properties | |
  17. | replication_status | disabled |
  18. | size | 1 |
  19. | snapshot_id | None |
  20. | source_volid | None |
  21. | status | creating |
  22. | type | None |
  23. | updated_at | None |
  24. | user_id | c4de9fac882740838aa26e9119b30cb9 |
  25. +---------------------+--------------------------------------+
  26. demorc@root@controller:~$openstack volume list
  27. +--------------------------------------+--------------+-----------+------+-------------+
  28. | ID | Display Name | Status | Size | Attached to |
  29. +--------------------------------------+--------------+-----------+------+-------------+
  30. | 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1 | available | 1 | |
  31. +--------------------------------------+--------------+-----------+------+-------------+

Attach the volume to an instance

  1. demorc@root@controller:~$nova list
  2. +--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
  3. | ID | Name | Status | Task State | Power State | Networks |
  4. +--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
  5. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | SHUTOFF | - | Shutdown | selfservice=192.168.56.3, 10.0.3.53 |
  6. +--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
  7. demorc@root@controller:~$nova start 4c954e71-8e73-49e1-a67f-20c007d582d3
  8. Request to start server 4c954e71-8e73-49e1-a67f-20c007d582d3 has been accepted.
  9. demorc@root@controller:~$nova list
  10. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  11. | ID | Name | Status | Task State | Power State | Networks |
  12. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  13. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3, 10.0.3.53 |
  14. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  15. demorc@root@controller:~$ping -c 2 10.0.3.53
  16. PING 10.0.3.53 (10.0.3.53) 56(84) bytes of data.
  17. 64 bytes from 10.0.3.53: icmp_seq=1 ttl=63 time=9.45 ms
  18. 64 bytes from 10.0.3.53: icmp_seq=2 ttl=63 time=0.548 ms
  19. demorc@openstack@controller:~$nova list
  20. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  21. | ID | Name | Status | Task State | Power State | Networks |
  22. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  23. | 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3, 10.0.3.53 |
  24. +--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
  25. demorc@openstack@controller:~$openstack volume list
  26. +--------------------------------------+--------------+-----------+------+-------------+
  27. | ID | Display Name | Status | Size | Attached to |
  28. +--------------------------------------+--------------+-----------+------+-------------+
  29. | 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1 | available | | |
  30. +--------------------------------------+--------------+-----------+------+-------------+
  31. # Copy the instance ID and the volume1 ID
  32. demorc@root@controller:~$openstack server add volume 4c954e71-8e73-49e1-a67f-20c007d582d3 240ee7be-49bb-48bc-8bb3-1c44196b5ad9
  33. # Check volume1's status again; it is now in use
  34. demorc@root@controller:~$openstack volume list
  35. +--------------------------------------+--------------+--------+------+-----------------------------------------------+
  36. | ID | Display Name | Status | Size | Attached to |
  37. +--------------------------------------+--------------+--------+------+-----------------------------------------------+
  38. | 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1 | in-use | 1 | Attached to selfservice-instance on /dev/vdb |
  39. +--------------------------------------+--------------+--------+------+-----------------------------------------------+

Format and mount the newly attached disk

  1. demorc@root@controller:~$ssh cirros@10.0.3.53
  2. $ sudo -s
  3. $ fdisk -l
  4.  
  5. Disk /dev/vda: MB, bytes
  6. heads, sectors/track, cylinders, total sectors
  7. Units = sectors of * = bytes
  8. Sector size (logical/physical): bytes / bytes
  9. I/O size (minimum/optimal): bytes / bytes
  10. Disk identifier: 0x00000000
  11.  
  12. Device Boot Start End Blocks Id System
  13. /dev/vda1 * + Linux
  14.  
  15. Disk /dev/vdb: MB, bytes
  16. heads, sectors/track, cylinders, total sectors
  17. Units = sectors of * = bytes
  18. Sector size (logical/physical): bytes / bytes
  19. I/O size (minimum/optimal): bytes / bytes
  20. Disk identifier: 0x00000000
  21.  
  22. Disk /dev/vdb doesn't contain a valid partition table
  23. $ mkfs.ext4 /dev/sdb # typo: no such device inside the guest
  24. $ mkfs.ext4 /dev/vdb
  25. mke2fs 1.42. (-Mar-)
  26. Filesystem label=
  27. OS type: Linux
  28. Block size= (log=)
  29. Fragment size= (log=)
  30. Stride= blocks, Stripe width= blocks
  31. inodes, blocks
  32. blocks (5.00%) reserved for the super user
  33. First data block=
  34. Maximum filesystem blocks=
  35. block groups
  36. blocks per group, fragments per group
  37. inodes per group
  38. Superblock backups stored on blocks:
  39. , , ,
  40.  
  41. Allocating group tables: done
  42. Writing inode tables: done
  43. Creating journal ( blocks): done
  44. Writing superblocks and filesystem accounting information:done
  45. $ mount /dev/vdb /mnt # (mount step, implied by the listing below)
  46. $ ls /mnt/
  47. lost+found
  48. $ touch /mnt/test
  49. $ ls /mnt/
  50. lost+found test
  51. $ exit
  52. $ exit
  53. Connection to 10.0.3.53 closed.
  54. demorc@root@controller:~$exit
  55. exit

(For learning purposes only. If any content here infringes your rights, please leave a comment and I will remove it immediately.)
