This section discusses nova-compute and walks through the entire process of deploying an instance.
 
nova-compute runs on the compute node and manages the instances on that node. Every operation OpenStack performs on an instance is ultimately carried out by nova-compute. Working together with the Hypervisor, nova-compute implements OpenStack's management of the instance life cycle.
 
Supporting multiple Hypervisors through the Driver architecture
 
The next question is: there are many different Hypervisors on the market, so how does nova-compute work with all of them?
 
 
This is exactly the Driver architecture we discussed earlier. nova-compute defines a unified interface for the Hypervisors; a Hypervisor only needs to implement this interface to plug into OpenStack as a Driver. The figure below shows the Nova Driver architecture:
 
 
Under the /opt/stack/nova/nova/virt/ directory we can see that the OpenStack source code already ships with Drivers for these Hypervisors:
 
stack@DevStack-Controller:~$ ll /opt/stack/nova/nova/virt/ | grep '^d'
drwxr-xr-x  9 stack stack     4096 May 22 01:12 ./
drwxr-xr-x 32 stack stack     4096 May 22 01:12 ../
drwxr-xr-x  4 stack stack     4096 May 22 01:12 disk/
drwxr-xr-x  2 stack stack     4096 May 22 00:55 hyperv/
drwxr-xr-x  2 stack stack     4096 May 22 01:12 image/
drwxr-xr-x  2 stack stack     4096 May 22 00:55 ironic/
drwxr-xr-x  4 stack stack     4096 May 22 01:12 libvirt/
drwxr-xr-x  2 stack stack     4096 May 22 00:55 vmwareapi/
drwxr-xr-x  3 stack stack     4096 May 22 00:55 xenapi/
 
A given compute node runs only one type of Hypervisor, so we simply specify compute_driver in that node's nova-compute configuration file. Since we are using KVM, the libvirt driver is configured:
 
stack@DevStack-Controller:~$ cat /etc/nova/nova.conf | grep compute_driver
compute_driver = libvirt.LibvirtDriver
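
Other Hypervisors are selected in the same way. For reference, a few compute_driver values that correspond to the in-tree drivers listed above (exactly one is set on a given node):

compute_driver = libvirt.LibvirtDriver      # KVM/QEMU via libvirt (used in this environment)
compute_driver = xenapi.XenAPIDriver        # XenServer / XCP
compute_driver = vmwareapi.VMwareVCDriver   # VMware vCenter
compute_driver = ironic.IronicDriver        # bare metal via Ironic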
 
The work nova-compute does falls into two categories:
 
    1. Periodically report the state of the compute node to OpenStack
    2. Manage the instance life cycle
 
Periodically reporting the compute node's state to OpenStack
 
Earlier we saw that many of nova-scheduler's Filters filter on the compute node's resource usage. For example, RamFilter checks the node's currently available memory, CoreFilter checks the available vCPUs, and DiskFilter checks the available disk space.
 
This raises a question: how does OpenStack learn this information about each compute node?
 
The answer: nova-compute reports it to OpenStack periodically. From the nova-compute log /opt/stack/logs/n-cpu.log we can see that, at regular intervals, nova-compute reports the current resource usage of the compute node as well as the state of the nova-compute service:
 
2019-05-23 15:44:30.875 DEBUG nova.compute.resource_tracker [req-aceadd16-b754-4d7d-a373-504d946ed357 None None] Hypervisor/Node resource view: name=DevStack-Compute free_ram=15469MB free_disk=152GB free_vcpus=8
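
To watch these periodic reports as they come in, it is enough to follow the log on the compute node (the grep pattern matches the "resource view" line shown above):

stack@DevStack-Compute:~$ tail -f /opt/stack/logs/n-cpu.log | grep "Hypervisor/Node resource view"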
 
Digging one level deeper: how does nova-compute obtain the current node's resource usage information?
 
To get a detailed picture of the node's resource usage, it needs the resource consumption of every instance on that node. nova-compute obtains this information through the Hypervisor driver.
 
For example, the Hypervisor in our lab environment is KVM and the driver is LibvirtDriver. LibvirtDriver calls the relevant libvirt APIs to obtain the resource information; those APIs do roughly what commands such as virsh nodeinfo and virsh dominfo do in the CLI.
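
The same information can be inspected manually on a compute node; <domain-name> below is a placeholder for a libvirt domain name such as the instance-00000003 that appears later in this section:

stack@DevStack-Compute:~$ virsh nodeinfo               # host CPU model, cores, total memory
stack@DevStack-Compute:~$ virsh list --all             # libvirt domains (instances) on this node
stack@DevStack-Compute:~$ virsh dominfo <domain-name>  # vCPU and memory details of one domain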
 
Managing the instance life cycle
 
The main operations OpenStack performs on an instance are all implemented by nova-compute, including launch, shutdown, reboot, suspend, resume, terminate, resize, migration, snapshot, and so on.
 
This subsection focuses on how nova-compute implements the instance launch (deployment) operation; the other operations will be discussed in later chapters.
 
After nova-scheduler has chosen the compute node on which to deploy the instance, it sends a launch instance command to that node through the RabbitMQ message broker. The nova-compute service running on that node receives the message and performs the instance creation. The log /opt/stack/logs/n-cpu.log records the whole process.
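
As an optional sanity check (assuming rabbitmqctl is available on the controller where the broker runs; the exact queue names depend on the deployment), the compute RPC queues that carry these messages can be listed:

stack@DevStack-Controller:~$ sudo rabbitmqctl list_queues name | grep compute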
 
The process by which nova-compute creates an instance can be divided into four steps:
 
    1. Prepare resources for the instance
    2. Create the instance's image file
    3. Create the instance's XML definition file
    4. Create the virtual network and start the virtual machine
 
 
Below are the details of instance admin-test03 and its creation process.
 
 
2019-05-23 16:35:16.413 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "a0e2b485-f40c-43e4-beb6-049b6399f0ec" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2019-05-23 16:35:16.430 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Starting instance... from (pid=4613) _do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1766
2019-05-23 16:35:16.528 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Attempting claim: memory 256 MB, disk 0 GB, vcpus 1 CPU    #    claim resources on the node (mem, cpu, disk)
2019-05-23 16:35:16.528 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Total memory: 16046 MB, used: 1024.00 MB    #    claim memory
2019-05-23 16:35:16.529 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] memory limit: 24069.00 MB, free: 23045.00 MB
2019-05-23 16:35:16.529 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Total disk: 155 GB, used: 0.00 GB    #    claim disk
2019-05-23 16:35:16.529 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] disk limit: 155.00 GB, free: 155.00 GB
2019-05-23 16:35:16.530 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Total vcpu: 8 VCPU, used: 2.00 VCPU    #    claim vcpu
2019-05-23 16:35:16.530 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] vcpu limit not specified, defaulting to unlimited
2019-05-23 16:35:16.531 INFO nova.compute.claims [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Claim successful    #    claim successful
2019-05-23 16:35:16.669 DEBUG nova.scheduler.client.report [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Sending allocation for instance {'MEMORY_MB': 256, 'VCPU': 1} from (pid=4613) _allocate_for_instance /opt/stack/nova/nova/scheduler/client/report.py:664
2019-05-23 16:35:16.722 INFO nova.scheduler.client.report [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Submitted allocation for instance
2019-05-23 16:35:16.861 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Start building networks asynchronously for instance. from (pid=4613) _build_resources /opt/stack/nova/nova/compute/manager.py:2083
2019-05-23 16:35:16.963 WARNING nova.virt.libvirt.driver [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
2019-05-23 16:35:16.968 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Allocating IP information in the background. from (pid=4613) _allocate_network_async /opt/stack/nova/nova/compute/manager.py:1398
2019-05-23 16:35:16.969 DEBUG nova.network.neutronv2.api [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] allocate_for_instance() from (pid=4613) allocate_for_instance /opt/stack/nova/nova/network/neutronv2/api.py:840
2019-05-23 16:35:16.997 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Start building block device mappings for instance. from (pid=4613) _build_resources /opt/stack/nova/nova/compute/manager.py:2109
2019-05-23 16:35:17.086 INFO nova.virt.block_device [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Booting with blank volume at /dev/vda
2019-05-23 16:35:17.919 DEBUG nova.network.neutronv2.api [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Successfully created port: ef41d30d-862f-4342-919a-95ed7a0587e3 from (pid=4613) _create_port_minimal /opt/stack/nova/nova/network/neutronv2/api.py:407
2019-05-23 16:35:18.922 DEBUG nova.network.neutronv2.api [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Successfully updated port: ef41d30d-862f-4342-919a-95ed7a0587e3 from (pid=4613) _update_port /opt/stack/nova/nova/network/neutronv2/api.py:444
2019-05-23 16:35:18.964 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Acquired semaphore "refresh_cache-a0e2b485-f40c-43e4-beb6-049b6399f0ec" from (pid=4613) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
2019-05-23 16:35:18.965 DEBUG nova.network.neutronv2.api [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] _get_instance_nw_info() from (pid=4613) _get_instance_nw_info /opt/stack/nova/nova/network/neutronv2/api.py:1295
2019-05-23 16:35:19.054 DEBUG nova.compute.manager [req-d3b1fdc5-9b3b-4d58-9a28-56aec0caf773 service nova] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Received event network-changed from (pid=4613) external_instance_event /opt/stack/nova/nova/compute/manager.py:6900
2019-05-23 16:35:19.059 DEBUG neutronclient.v2_0.client [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] GET call to neutron for http://10.12.31.241:9696/v2.0/ports.json?tenant_id=c2b9e5f4a15d43218f3fca6e13c49a3a&device_id=a0e2b485-f40c-43e4-beb6-049b6399f0ec used request id req-02170a5f-ad8c-46bd-a0d7-a87a0369f578 from (pid=4613) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
2019-05-23 16:35:19.060 DEBUG nova.network.neutronv2.api [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Instance cache missing network info. from (pid=4613) _get_preexisting_port_ids /opt/stack/nova/nova/network/neutronv2/api.py:2194
2019-05-23 16:35:19.285 DEBUG cinderclient.v2.client [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] REQ: curl -g -i -X POST http://10.12.31.241:8776/v2/c2b9e5f4a15d43218f3fca6e13c49a3a/volumes/2ba40932-cabc-40b1-9011-87354ac29fc1/action -H "User-Agent: python-cinderclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}b6ab852c24e900a76c48a07b9d95e71bebefd129" -d '{"os-attach": {"instance_uuid": "a0e2b485-f40c-43e4-beb6-049b6399f0ec", "mountpoint": "/dev/vda", "mode": "rw"}}' from (pid=4613) _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:347
2019-05-23 16:35:19.476 DEBUG nova.network.base_api [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Updating instance_info_cache with network_info: [{"profile": {}, "ovs_interfaceid": null, "preserve_on_delete": false, "network": {"bridge": "brq32740d6a-81", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::3"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "c2b9e5f4a15d43218f3fca6e13c49a3a", "should_create_bridge": true, "mtu": 1500}, "id": "32740d6a-8119-4c8e-9828-fe5da5b6e7ac", "label": "public"}, "devname": "tapef41d30d-86", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true}, "address": "fa:16:3e:7c:d6:21", "active": false, "type": "bridge", "id": "ef41d30d-862f-4342-919a-95ed7a0587e3", "qbg_params": null}] from (pid=4613) update_instance_cache_with_nw_info /opt/stack/nova/nova/network/base_api.py:48
2019-05-23 16:35:19.529 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Releasing semaphore "refresh_cache-a0e2b485-f40c-43e4-beb6-049b6399f0ec" from (pid=4613) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
2019-05-23 16:35:19.529 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Instance network_info: |[{"profile": {}, "ovs_interfaceid": null, "preserve_on_delete": false, "network": {"bridge": "brq32740d6a-81", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::3"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "c2b9e5f4a15d43218f3fca6e13c49a3a", "should_create_bridge": true, "mtu": 1500}, "id": "32740d6a-8119-4c8e-9828-fe5da5b6e7ac", "label": "public"}, "devname": "tapef41d30d-86", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true}, "address": "fa:16:3e:7c:d6:21", "active": false, "type": "bridge", "id": "ef41d30d-862f-4342-919a-95ed7a0587e3", "qbg_params": null}]| from (pid=4613) _allocate_network_async /opt/stack/nova/nova/compute/manager.py:1413
2019-05-23 16:35:19.531 DEBUG oslo_concurrency.lockutils [req-d3b1fdc5-9b3b-4d58-9a28-56aec0caf773 service nova] Acquired semaphore "refresh_cache-a0e2b485-f40c-43e4-beb6-049b6399f0ec" from (pid=4613) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
2019-05-23 16:35:19.531 DEBUG nova.network.neutronv2.api [req-d3b1fdc5-9b3b-4d58-9a28-56aec0caf773 service nova] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] _get_instance_nw_info() from (pid=4613) _get_instance_nw_info /opt/stack/nova/nova/network/neutronv2/api.py:1295
2019-05-23 16:35:19.621 DEBUG neutronclient.v2_0.client [req-d3b1fdc5-9b3b-4d58-9a28-56aec0caf773 service nova] GET call to neutron for http://10.12.31.241:9696/v2.0/ports.json?tenant_id=c2b9e5f4a15d43218f3fca6e13c49a3a&device_id=a0e2b485-f40c-43e4-beb6-049b6399f0ec used request id req-4974104b-dd47-4e3c-a361-5af0216b77a9 from (pid=4613) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
2019-05-23 16:35:19.742 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Start spawning the instance on the hypervisor. from (pid=4613) _build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1942
2019-05-23 16:35:19.743 INFO nova.virt.libvirt.driver [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Creating image
2019-05-23 16:35:20.017 DEBUG nova.network.base_api [req-d3b1fdc5-9b3b-4d58-9a28-56aec0caf773 service nova] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Updating instance_info_cache with network_info: [{"profile": {}, "ovs_interfaceid": null, "preserve_on_delete": false, "network": {"bridge": "brq32740d6a-81", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::3"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "c2b9e5f4a15d43218f3fca6e13c49a3a", "should_create_bridge": true, "mtu": 1500}, "id": "32740d6a-8119-4c8e-9828-fe5da5b6e7ac", "label": "public"}, "devname": "tapef41d30d-86", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true}, "address": "fa:16:3e:7c:d6:21", "active": false, "type": "bridge", "id": "ef41d30d-862f-4342-919a-95ed7a0587e3", "qbg_params": null}] from (pid=4613) update_instance_cache_with_nw_info /opt/stack/nova/nova/network/base_api.py:48
2019-05-23 16:35:20.028 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Running cmd (subprocess): cp -r /opt/stack/data/nova/instances/_base/013ffe0108c53d3e6a35423faf2481a9302c34aa /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/kernel from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
2019-05-23 16:35:20.038 DEBUG oslo_concurrency.lockutils [req-d3b1fdc5-9b3b-4d58-9a28-56aec0caf773 service nova] Releasing semaphore "refresh_cache-a0e2b485-f40c-43e4-beb6-049b6399f0ec" from (pid=4613) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
2019-05-23 16:35:20.040 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] CMD "cp -r /opt/stack/data/nova/instances/_base/013ffe0108c53d3e6a35423faf2481a9302c34aa /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/kernel" returned: 0 in 0.012s from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2019-05-23 16:35:20.041 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/kernel from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
2019-05-23 16:35:20.092 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/kernel" returned: 0 in 0.051s from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2019-05-23 16:35:20.093 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "/opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/disk.info" acquired by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: waited 0.000s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2019-05-23 16:35:20.098 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "/opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/disk.info" released by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: held 0.005s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2019-05-23 16:35:20.355 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Running cmd (subprocess): cp -r /opt/stack/data/nova/instances/_base/aee1333f1c105e7dafde694cd2bb73d547b6598d /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/ramdisk from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
2019-05-23 16:35:20.367 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] CMD "cp -r /opt/stack/data/nova/instances/_base/aee1333f1c105e7dafde694cd2bb73d547b6598d /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/ramdisk" returned: 0 in 0.011s from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2019-05-23 16:35:20.368 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/ramdisk from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
2019-05-23 16:35:20.420 DEBUG oslo_concurrency.processutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/ramdisk" returned: 0 in 0.052s from (pid=4613) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2019-05-23 16:35:20.421 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "/opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/disk.info" acquired by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: waited 0.000s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2019-05-23 16:35:20.422 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "/opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/disk.info" released by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: held 0.001s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2019-05-23 16:35:20.422 DEBUG nova.virt.libvirt.driver [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Ensure instance console log exists: /opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/console.log from (pid=4613) _ensure_console_log_for_instance /opt/stack/nova/nova/virt/libvirt/driver.py:3061
2019-05-23 16:35:20.425 DEBUG nova.virt.libvirt.driver [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Start _get_guest_xml network_info=[{"profile": {}, "ovs_interfaceid": null, "preserve_on_delete": false, "network": {"bridge": "brq32740d6a-81", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::3"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "c2b9e5f4a15d43218f3fca6e13c49a3a", "should_create_bridge": true, "mtu": 1500}, "id": "32740d6a-8119-4c8e-9828-fe5da5b6e7ac", "label": "public"}, "devname": "tapef41d30d-86", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true}, "address": "fa:16:3e:7c:d6:21", "active": false, "type": "bridge", "id": "ef41d30d-862f-4342-919a-95ed7a0587e3", "qbg_params": null}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'ide', 'mapping': {u'/dev/vda': {'bus': u'virtio', 'boot_index': '1', 'type': u'disk', 'dev': u'vda'}, 'root': {'bus': u'virtio', 'boot_index': '1', 'type': u'disk', 'dev': u'vda'}}} image_meta=ImageMeta(checksum='eb9139e4942121f22bbc2afc0400b2a4',container_format='ami',created_at=2019-05-21T17:11:20Z,direct_url=<?>,disk_format='ami',id=7c5fbab9-c215-47db-9848-66ca5305f0ac,min_disk=0,min_ram=0,name='cirros-0.3.4-x86_64-uec',owner='c2b9e5f4a15d43218f3fca6e13c49a3a',properties=ImageMetaProps,protected=<?>,size=25165824,status='active',tags=<?>,updated_at=2019-05-21T17:11:20Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'swap': None, 'root_device_name': u'/dev/vda', 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'mount_device': u'/dev/vda', 'connection_info': {u'driver_volume_type': u'iscsi', 'connector': {'platform': 'x86_64', 'host': 'DevStack-Controller', 'do_local_attach': False, 'ip': '10.12.31.241', 'os_type': 'linux2', 'multipath': False, 'initiator': u'iqn.1993-08.org.debian:01:1997f5bacda'}, 'serial': u'2ba40932-cabc-40b1-9011-87354ac29fc1', u'data': {u'access_mode': u'rw', u'target_discovered': False, u'encrypted': False, u'qos_specs': None, u'target_iqn': u'iqn.2010-10.org.openstack:volume-2ba40932-cabc-40b1-9011-87354ac29fc1', u'target_portal': u'10.12.31.241:3260', u'volume_id': u'2ba40932-cabc-40b1-9011-87354ac29fc1', u'target_lun': 1, u'auth_password': u'***', u'auth_username': u'Nto8maSrg6QoWcwQ6bQ7', u'auth_method': u'CHAP'}}, 'disk_bus': u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]} from (pid=4613) _get_guest_xml /opt/stack/nova/nova/virt/libvirt/driver.py:5062
2019-05-23 16:35:21.882 DEBUG nova.virt.libvirt.vif [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] vif_type=bridge instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2019-05-23T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='admin-test03',display_name='admin-test03',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),host='DevStack-Controller',hostname='admin-test03',id=3,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='c3f9bfb6-f089-4a0a-b410-e128284761f8',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='DevStack-Controller',locked=False,locked_by=None,memory_mb=256,metadata={},migration_context=<?>,new_flavor=None,node='DevStack-Controller',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2b9e5f4a15d43218f3fca6e13c49a3a',ramdisk_id='16b087bd-8aa5-48fa-968b-6d8986ee2434',reservation_id='r-m4xc1dvw',root_device_name='/dev/vda',root_gb=0,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin',image_base_image_ref='',image_container_format='ami',image_disk_format='ami',image_kernel_id='c3f9bfb6-f089-4a0a-b410-e128284761f8',image_min_disk='0',image_min_ram='0',image_ramdisk_id='16b087bd-8aa5-48fa-968b-6d8986ee2434',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='spawning',terminated_at=None,updated_at=2019-05-23T08:35:17Z,user_data=None,user_id='c23652fbcaa74c1e8becc960e2210820',uuid=a0e2b485-f40c-43e4-beb6-049b6399f0ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"profile": {}, "ovs_interfaceid": null, "preserve_on_delete": false, "network": {"bridge": "brq32740d6a-81", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::3"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "c2b9e5f4a15d43218f3fca6e13c49a3a", "should_create_bridge": true, "mtu": 1500}, "id": "32740d6a-8119-4c8e-9828-fe5da5b6e7ac", "label": "public"}, "devname": "tapef41d30d-86", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true}, "address": "fa:16:3e:7c:d6:21", "active": false, "type": "bridge", "id": "ef41d30d-862f-4342-919a-95ed7a0587e3", "qbg_params": null} virt_type=kvm from (pid=4613) get_config /opt/stack/nova/nova/virt/libvirt/vif.py:529
2019-05-23 16:35:21.891 DEBUG nova.objects.instance [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lazy-loading 'pci_devices' on Instance uuid a0e2b485-f40c-43e4-beb6-049b6399f0ec from (pid=4613) obj_load_attr /opt/stack/nova/nova/objects/instance.py:1058
2019-05-23 16:35:21.902 DEBUG nova.virt.libvirt.driver [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] End _get_guest_xml xml=<domain type="kvm">
  <uuid>a0e2b485-f40c-43e4-beb6-049b6399f0ec</uuid>
  <name>instance-00000003</name>
  <memory>262144</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="15.1.6"/>
      <nova:name>admin-test03</nova:name>
      <nova:creationTime>2019-05-23 08:35:20</nova:creationTime>
      <nova:flavor name="cirros256">
        <nova:memory>256</nova:memory>
        <nova:disk>0</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="c23652fbcaa74c1e8becc960e2210820">admin</nova:user>
        <nova:project uuid="c2b9e5f4a15d43218f3fca6e13c49a3a">admin</nova:project>
      </nova:owner>
    </nova:instance>
  </metadata>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">OpenStack Foundation</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">15.1.6</entry>
      <entry name="serial">89b90ae8-bc53-f0dd-6f7c-57a35ce3f8ab</entry>
      <entry name="uuid">a0e2b485-f40c-43e4-beb6-049b6399f0ec</entry>
      <entry name="family">Virtual Machine</entry>
    </system>
  </sysinfo>
  <os>
    <type>hvm</type>
    <kernel>/opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/kernel</kernel>
    <initrd>/opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/ramdisk</initrd>
    <cmdline>root=/dev/vda console=tty0 console=ttyS0</cmdline>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="hpet" present="no"/>
  </clock>
  <cpu match="exact">
    <topology sockets="1" cores="1" threads="1"/>
  </cpu>
  <devices>
    <disk type="block" device="disk">
      <driver name="qemu" type="raw" cache="none" io="native"/>
      <source dev="/dev/disk/by-path/ip-10.12.31.241:3260-iscsi-iqn.2010-10.org.openstack:volume-2ba40932-cabc-40b1-9011-87354ac29fc1-lun-1"/>
      <target bus="virtio" dev="vda"/>
      <serial>2ba40932-cabc-40b1-9011-87354ac29fc1</serial>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:7c:d6:21"/>
      <model type="virtio"/>
      <source bridge="brq32740d6a-81"/>
      <target dev="tapef41d30d-86"/>
    </interface>
    <serial type="pty">
      <log file="/opt/stack/data/nova/instances/a0e2b485-f40c-43e4-beb6-049b6399f0ec/console.log" append="off"/>
    </serial>
    <graphics type="vnc" autoport="yes" keymap="en-us" listen="127.0.0.1"/>
    <video>
      <model type="cirrus"/>
    </video>
    <memballoon model="virtio">
      <stats period="10"/>
    </memballoon>
  </devices>
</domain>
2019-05-23 16:35:21.902 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Preparing to wait for external event network-vif-plugged-ef41d30d-862f-4342-919a-95ed7a0587e3 from (pid=4613) prepare_for_instance_event /opt/stack/nova/nova/compute/manager.py:328
2019-05-23 16:35:21.902 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "a0e2b485-f40c-43e4-beb6-049b6399f0ec-events" acquired by "nova.compute.manager._create_or_get_event" :: waited 0.000s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2019-05-23 16:35:21.903 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "a0e2b485-f40c-43e4-beb6-049b6399f0ec-events" released by "nova.compute.manager._create_or_get_event" :: held 0.000s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2019-05-23 16:35:21.903 DEBUG nova.virt.libvirt.vif [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] vif_type=bridge instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2019-05-23T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='admin-test03',display_name='admin-test03',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),host='DevStack-Controller',hostname='admin-test03',id=3,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='c3f9bfb6-f089-4a0a-b410-e128284761f8',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='DevStack-Controller',locked=False,locked_by=None,memory_mb=256,metadata={},migration_context=<?>,new_flavor=None,node='DevStack-Controller',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2b9e5f4a15d43218f3fca6e13c49a3a',ramdisk_id='16b087bd-8aa5-48fa-968b-6d8986ee2434',reservation_id='r-m4xc1dvw',root_device_name='/dev/vda',root_gb=0,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin',image_base_image_ref='',image_container_format='ami',image_disk_format='ami',image_kernel_id='c3f9bfb6-f089-4a0a-b410-e128284761f8',image_min_disk='0',image_min_ram='0',image_ramdisk_id='16b087bd-8aa5-48fa-968b-6d8986ee2434',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='spawning',terminated_at=None,updated_at=2019-05-23T08:35:17Z,user_data=None,user_id='c23652fbcaa74c1e8becc960e2210820',uuid=a0e2b485-f40c-43e4-beb6-049b6399f0ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"profile": {}, "ovs_interfaceid": null, "preserve_on_delete": false, "network": {"bridge": "brq32740d6a-81", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::3"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "c2b9e5f4a15d43218f3fca6e13c49a3a", "should_create_bridge": true, "mtu": 1500}, "id": "32740d6a-8119-4c8e-9828-fe5da5b6e7ac", "label": "public"}, "devname": "tapef41d30d-86", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true}, "address": "fa:16:3e:7c:d6:21", "active": false, "type": "bridge", "id": "ef41d30d-862f-4342-919a-95ed7a0587e3", "qbg_params": null} from (pid=4613) plug /opt/stack/nova/nova/virt/libvirt/vif.py:776
2019-05-23 16:35:23.274 DEBUG nova.compute.manager [req-35ca125d-7c8a-47f3-800e-c5aa180488b1 service nova] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Received event network-vif-plugged-ef41d30d-862f-4342-919a-95ed7a0587e3 from (pid=4613) external_instance_event /opt/stack/nova/nova/compute/manager.py:6900
2019-05-23 16:35:23.275 DEBUG oslo_concurrency.lockutils [req-35ca125d-7c8a-47f3-800e-c5aa180488b1 service nova] Lock "a0e2b485-f40c-43e4-beb6-049b6399f0ec-events" acquired by "nova.compute.manager._pop_event" :: waited 0.000s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2019-05-23 16:35:23.276 DEBUG oslo_concurrency.lockutils [req-35ca125d-7c8a-47f3-800e-c5aa180488b1 service nova] Lock "a0e2b485-f40c-43e4-beb6-049b6399f0ec-events" released by "nova.compute.manager._pop_event" :: held 0.001s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2019-05-23 16:35:23.276 DEBUG nova.compute.manager [req-35ca125d-7c8a-47f3-800e-c5aa180488b1 service nova] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Processing event network-vif-plugged-ef41d30d-862f-4342-919a-95ed7a0587e3 from (pid=4613) _process_instance_event /opt/stack/nova/nova/compute/manager.py:6855
2019-05-23 16:35:24.054 DEBUG nova.virt.driver [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] Emitting event <LifecycleEvent: 1558600524.05, a0e2b485-f40c-43e4-beb6-049b6399f0ec => Started> from (pid=4613) emit_event /opt/stack/nova/nova/virt/driver.py:1444
2019-05-23 16:35:24.055 INFO nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] VM Started (Lifecycle Event)
2019-05-23 16:35:24.061 DEBUG nova.virt.libvirt.driver [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Instance is running from (pid=4613) spawn /opt/stack/nova/nova/virt/libvirt/driver.py:2799
2019-05-23 16:35:24.064 INFO nova.virt.libvirt.driver [-] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Instance spawned successfully.
2019-05-23 16:35:24.065 INFO nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Took 4.32 seconds to spawn the instance on the hypervisor.
2019-05-23 16:35:24.065 DEBUG nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Checking state from (pid=4613) _get_power_state /opt/stack/nova/nova/compute/manager.py:1184
2019-05-23 16:35:24.094 DEBUG nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Checking state from (pid=4613) _get_power_state /opt/stack/nova/nova/compute/manager.py:1184
2019-05-23 16:35:24.102 DEBUG nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 from (pid=4613) handle_lifecycle_event /opt/stack/nova/nova/compute/manager.py:1096
2019-05-23 16:35:24.142 INFO nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] During sync_power_state the instance has a pending task (spawning). Skip.
2019-05-23 16:35:24.142 DEBUG nova.virt.driver [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] Emitting event <LifecycleEvent: 1558600524.06, a0e2b485-f40c-43e4-beb6-049b6399f0ec => Paused> from (pid=4613) emit_event /opt/stack/nova/nova/virt/driver.py:1444
2019-05-23 16:35:24.143 INFO nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] VM Paused (Lifecycle Event)
2019-05-23 16:35:24.195 DEBUG nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Checking state from (pid=4613) _get_power_state /opt/stack/nova/nova/compute/manager.py:1184
2019-05-23 16:35:24.199 DEBUG nova.virt.driver [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] Emitting event <LifecycleEvent: 1558600524.06, a0e2b485-f40c-43e4-beb6-049b6399f0ec => Resumed> from (pid=4613) emit_event /opt/stack/nova/nova/virt/driver.py:1444
2019-05-23 16:35:24.200 INFO nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] VM Resumed (Lifecycle Event)
2019-05-23 16:35:24.227 INFO nova.compute.manager [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Took 7.72 seconds to build instance.
2019-05-23 16:35:24.239 DEBUG nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Checking state from (pid=4613) _get_power_state /opt/stack/nova/nova/compute/manager.py:1184
2019-05-23 16:35:24.243 DEBUG nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 from (pid=4613) handle_lifecycle_event /opt/stack/nova/nova/compute/manager.py:1096
2019-05-23 16:35:24.248 DEBUG oslo_concurrency.lockutils [req-808daefa-2cd8-4c34-bd34-2730453805da admin admin] Lock "a0e2b485-f40c-43e4-beb6-049b6399f0ec" released by "nova.compute.manager._locked_do_build_and_run_instance" :: held 7.834s from (pid=4613) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2019-05-23 16:35:24.281 DEBUG nova.virt.driver [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] Emitting event <LifecycleEvent: 1558600524.06, a0e2b485-f40c-43e4-beb6-049b6399f0ec => Resumed> from (pid=4613) emit_event /opt/stack/nova/nova/virt/driver.py:1444
2019-05-23 16:35:24.282 INFO nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] VM Resumed (Lifecycle Event)
2019-05-23 16:35:24.320 DEBUG nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Checking state from (pid=4613) _get_power_state /opt/stack/nova/nova/compute/manager.py:1184
2019-05-23 16:35:24.323 DEBUG nova.compute.manager [req-716d3f56-c88c-4ae8-ae98-166a6848a639 None None] [instance: a0e2b485-f40c-43e4-beb6-049b6399f0ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 from (pid=4613) handle_lifecycle_event /opt/stack/nova/nova/compute/manager.py:1096
 
① Prepare resources for the instance
 
nova-compute first allocates memory, disk, vCPU, and network resources for the instance according to the specified flavor.
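
The amounts come from the flavor; they can be checked from the CLI (cirros256 is the flavor name recorded in the instance XML metadata above):

stack@DevStack-Controller:~$ openstack flavor show cirros256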
 
② Create the instance's image file
 
Once the resources are ready, nova-compute creates the image file for the instance. When OpenStack launches an instance, it picks an image, which is managed by Glance. nova-compute will:
 
    1. First download the image to the compute node
    2. Then create the instance's image file, using the downloaded image as its backing file
 
Before downloading an image from Glance, nova-compute first checks whether the image has already been downloaded (for example, an instance based on the same image was created earlier). If not, it downloads the image from Glance to the local node. Consequently, when several instances based on the same image run on a compute node, the image is downloaded from Glance only when the first instance is launched; subsequent instances start much faster.
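
A minimal, self-contained illustration of the backing-file mechanism with qemu-img (nova does the same thing, with the cached image under instances_path/_base as the backing file; note that in the launch traced above the instance actually boots from a blank Cinder volume, so no qcow2 root disk appears in its directory):

stack@DevStack-Compute:~$ qemu-img create -f qcow2 /tmp/base.qcow2 1G
stack@DevStack-Compute:~$ qemu-img create -f qcow2 -o backing_file=/tmp/base.qcow2,backing_fmt=qcow2 /tmp/overlay.qcow2
stack@DevStack-Compute:~$ qemu-img info /tmp/overlay.qcow2    # reports "backing file: /tmp/base.qcow2"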
 
The nova configuration file defines instances_path = /opt/stack/data/nova/instances; the instance's disk files, log files, and so on are all stored there.
 
root@DevStack-Controller:/opt/stack/data/nova/instances# ll
total 32
drwxr-xr-x 7 stack root     4096 May 23 16:35 ./
drwxr-xr-x 4 stack root     4096 May 23 16:21 ../
drwxr-xr-x 2 stack libvirtd 4096 May 23 16:21 60958b71-e535-4241-a2a8-bf59c3e36abe/
drwxr-xr-x 2 stack libvirtd 4096 May 23 16:30 7b56d1e5-235e-4b95-a2fe-74017f744042/
drwxr-xr-x 2 stack libvirtd 4096 May 23 16:35 a0e2b485-f40c-43e4-beb6-049b6399f0ec/
drwxr-xr-x 2 stack libvirtd 4096 May 23 16:21 _base/
-rw-r--r-- 1 stack libvirtd   42 May 23 20:18 compute_nodes
drwxr-xr-x 2 stack libvirtd 4096 May 23 16:21 locks/
root@DevStack-Controller:/opt/stack/data/nova/instances# ll -h a0e2b485-f40c-43e4-beb6-049b6399f0ec/
total 8.4M
drwxr-xr-x 2 stack        libvirtd 4.0K May 23 16:35 ./
drwxr-xr-x 7 stack        root     4.0K May 23 16:35 ../
-rw------- 1 root         root      23K May 23 16:39 console.log
-rw-r--r-- 1 stack        libvirtd  172 May 23 16:35 disk.info
-rw-r--r-- 1 libvirt-qemu kvm      4.8M May 23 16:35 kernel
-rw-r--r-- 1 libvirt-qemu kvm      3.6M May 23 16:35 ramdisk
 
Two easily confused terms appear here:
 
    1. image: the image stored in Glance, which serves as the template for running an instance; the compute node downloads it locally when needed
    2. image file: the file backing the instance's boot disk
    3. In English both are simply called "image"; to avoid confusion we use "image" for the former and "image file" for the latter
 
③ Create the instance's XML definition file
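
The generated libvirt XML is the one printed between "Start _get_guest_xml" and "End _get_guest_xml" in the log above; once the domain has been defined it can also be dumped directly (the domain name instance-00000003 comes from that XML):

stack@DevStack-Controller:~$ virsh dumpxml instance-00000003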
 
④ Create the virtual network and start the instance
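
On the node hosting the instance, the results of steps ③ and ④ can be checked with libvirt and bridge tools (the domain, bridge, and tap names below are taken from the log above):

stack@DevStack-Controller:~$ virsh list                     # instance-00000003 should be running
stack@DevStack-Controller:~$ brctl show brq32740d6a-81      # tapef41d30d-86 is attached to this Linux bridge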
 
After the instance starts, its state is checked and updated periodically. In the end, the newly created instance is visible both from the command line and in the web UI.
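
For example, from the controller:

stack@DevStack-Controller:~$ openstack server list
stack@DevStack-Controller:~$ openstack server show admin-test03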
 
 

The following problem remains unresolved
 
The deployed OpenStack could not create instances; the instance details showed "Host 'DevStack-Compute' is not mapped to any cell".
 
 
Running the following commands on the DevStack-Controller node resolves it:
 
stack@DevStack-Controller:~$ nova-manage cell_v2 discover_hosts
/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166: Warning: (1287, u"'@@tx_isolation' is deprecated and will be removed in a future release. Please use '@@transaction_isolation' instead")
  result = self._query(query)
/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166: Warning: (3090, u"Changing sql mode 'NO_AUTO_CREATE_USER' is deprecated. It will be removed in a future release.")
  result = self._query(query)
stack@DevStack-Controller:~$ nova-manage cell_v2 discover_hosts --verbose
/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166: Warning: (1287, u"'@@tx_isolation' is deprecated and will be removed in a future release. Please use '@@transaction_isolation' instead")
  result = self._query(query)
/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166: Warning: (3090, u"Changing sql mode 'NO_AUTO_CREATE_USER' is deprecated. It will be removed in a future release.")
  result = self._query(query)
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell: 5fd8cdfc-e20e-46da-8c1a-88e1c5ce2790
Found 2 computes in cell: 5fd8cdfc-e20e-46da-8c1a-88e1c5ce2790
Checking host mapping for compute host 'DevStack-Controller': 3ead96c5-1460-4573-8154-15b6727c1178
Checking host mapping for compute host 'DevStack-Compute': 00572519-253c-4d4e-843b-c4baceaf0a3b
 
After running the commands above, instances can be created, but only on DevStack-Controller; after disabling nova on DevStack-Controller, creating an instance fails again.
 
 
Then the command was run on DevStack-Compute:
 
stack@DevStack-Compute:~$ source devstack/openrc admin admin
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
stack@DevStack-Compute:~$ nova-manage cell_v2 discover_hosts --verbose
An error has occurred:
Traceback (most recent call last):
  File "/opt/stack/nova/nova/cmd/manage.py", line 1682, in main
    ret = fn(*fn_args, **fn_kwargs)
  File "/opt/stack/nova/nova/cmd/manage.py", line 1416, in discover_hosts
    hosts = host_mapping_obj.discover_hosts(ctxt, cell_uuid, status_fn)
  File "/opt/stack/nova/nova/objects/host_mapping.py", line 184, in discover_hosts
    cell_mappings = objects.CellMappingList.get_all(ctxt)
  File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 184, in wrapper
    result = fn(cls, context, *args, **kwargs)
  File "/opt/stack/nova/nova/objects/cell_mapping.py", line 127, in get_all
    db_mappings = cls._get_all_from_db(context)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 893, in wrapper
    with self._transaction_scope(context):
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 944, in _transaction_scope
    allow_async=self._allow_async) as resource:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 558, in _session
    bind=self.connection, mode=self.mode)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 317, in _create_session
    self._start()
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 403, in _start
    engine_args, maker_args)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 425, in _setup_for_connection
    "No sql_connection parameter is established")
CantStartEngineError: No sql_connection parameter is established
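
A plausible explanation, offered as an assumption rather than a verified fix: nova-manage cell_v2 talks directly to the Nova API database, and on this compute-only node no database connection is configured, which is exactly what CantStartEngineError ("No sql_connection parameter is established") indicates. Running discover_hosts on the controller, or pointing the compute node's nova.conf at the controller's API database, should avoid the error; a hypothetical example of the latter (URL and credentials are placeholders):

# /etc/nova/nova.conf on DevStack-Compute
[api_database]
connection = mysql+pymysql://root:<password>@10.12.31.241/nova_api?charset=utf8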
 
 
 
