Bug 1459592 - error : Unable to move /dev/log mount to /var/run/libvirt/qemu/instance-00000002.log: No such file or directory
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Michal Privoznik
QA Contact: yafu
Keywords: Upstream, ZStream
Duplicates: 1463625
Depends On:
Blocks: 1471660
Reported: 2017-06-07 10:20 EDT by Alexander Chuzhoy
Modified: 2018-04-10 06:48 EDT (History)
CC List: 32 users

See Also:
Fixed In Version: libvirt-3.7.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Cloned To: 1471660
Environment:
Last Closed: 2018-04-10 06:46:43 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
libvirt logs with debug enabled from the nova_libvirt container (78.63 KB, application/x-gzip)
2017-06-13 11:58 EDT, Alexander Chuzhoy
/var/log/containers/nova/nova-compute.log (374.09 KB, application/x-gzip)
2017-06-20 13:24 EDT, Alexander Chuzhoy


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:0704 None None None 2018-04-10 06:48 EDT

Description Alexander Chuzhoy 2017-06-07 10:20:10 EDT
openstack-nova: OSP12 failing to launch instance on overcloud (libvirtError: internal error)

Environment:
libvirt-daemon-driver-nwfilter-3.2.0-6.el7.x86_64
libvirt-client-3.2.0-6.el7.x86_64
openstack-nova-novncproxy-16.0.0-0.20170521033533.99bd334.el7ost.noarch
libvirt-daemon-driver-storage-logical-3.2.0-6.el7.x86_64
libvirt-libs-3.2.0-6.el7.x86_64
libvirt-daemon-config-nwfilter-3.2.0-6.el7.x86_64
python-nova-16.0.0-0.20170521033533.99bd334.el7ost.noarch
openstack-nova-conductor-16.0.0-0.20170521033533.99bd334.el7ost.noarch
libvirt-daemon-driver-storage-mpath-3.2.0-6.el7.x86_64
puppet-nova-11.1.0-0.20170520132314.882caca.el7ost.noarch
libvirt-daemon-3.2.0-6.el7.x86_64
libvirt-daemon-driver-nodedev-3.2.0-6.el7.x86_64
libvirt-daemon-driver-lxc-3.2.0-6.el7.x86_64
libvirt-daemon-kvm-3.2.0-6.el7.x86_64
openstack-nova-migration-16.0.0-0.20170521033533.99bd334.el7ost.noarch
openstack-nova-api-16.0.0-0.20170521033533.99bd334.el7ost.noarch
libvirt-3.2.0-6.el7.x86_64
libvirt-daemon-driver-storage-core-3.2.0-6.el7.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-6.el7.x86_64
libvirt-daemon-driver-storage-3.2.0-6.el7.x86_64
python-novaclient-8.0.0-0.20170517113627.e1b9e76.el7ost.noarch
libvirt-daemon-driver-network-3.2.0-6.el7.x86_64
libvirt-daemon-driver-interface-3.2.0-6.el7.x86_64
libvirt-daemon-config-network-3.2.0-6.el7.x86_64
openstack-nova-compute-16.0.0-0.20170521033533.99bd334.el7ost.noarch
openstack-nova-placement-api-16.0.0-0.20170521033533.99bd334.el7ost.noarch
openstack-nova-console-16.0.0-0.20170521033533.99bd334.el7ost.noarch
libvirt-daemon-driver-storage-scsi-3.2.0-6.el7.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-6.el7.x86_64
libvirt-daemon-driver-secret-3.2.0-6.el7.x86_64
openstack-nova-common-16.0.0-0.20170521033533.99bd334.el7ost.noarch
openstack-nova-scheduler-16.0.0-0.20170521033533.99bd334.el7ost.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-6.el7.x86_64
libvirt-python-3.2.0-2.el7.x86_64
libvirt-daemon-driver-qemu-3.2.0-6.el7.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-6.el7.x86_64
openstack-tripleo-heat-templates-7.0.0-0.20170521033302.el7ost.noarch
openstack-puppet-modules-10.0.0-0.20170315222135.0333c73.el7.1.noarch
instack-undercloud-7.0.0-0.20170521005009.el7ost.noarch


Steps to reproduce:
1) deploy overcloud with containers:
openstack overcloud deploy --templates \
    --libvirt-type kvm \
    -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/hostnames.yml \
-e /home/stack/virt/docker-osp12.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/nodes_data.yaml 

2) On overcloud, add cirros image and try to boot an instance without attaching to networks:
 openstack server create --flavor 1 --image cirros nisim2

Result:
(overcloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+--------+--------+----------+------------+
| ID                                   | Name   | Status | Networks | Image Name |
+--------------------------------------+--------+--------+----------+------------+
| d9ca7e44-c18d-4e23-9d09-1cbcdd029128 | nisim2 | ERROR  |          | cirros     |
+--------------------------------------+--------+--------+----------+------------+



(overcloud) [stack@undercloud-0 ~]$ openstack server show nisim2
+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                               | Value                                                                                                                                                |
+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                                                                                               |
| OS-EXT-AZ:availability_zone         | nova                                                                                                                                                 |
| OS-EXT-SRV-ATTR:host                | None                                                                                                                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                                                                                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000002                                                                                                                                    |
| OS-EXT-STS:power_state              | NOSTATE                                                                                                                                              |
| OS-EXT-STS:task_state               | None                                                                                                                                                 |
| OS-EXT-STS:vm_state                 | error                                                                                                                                                |
| OS-SRV-USG:launched_at              | None                                                                                                                                                 |
| OS-SRV-USG:terminated_at            | None                                                                                                                                                 |
| accessIPv4                          |                                                                                                                                                      |
| accessIPv6                          |                                                                                                                                                      |
| addresses                           |                                                                                                                                                      |
| config_drive                        |                                                                                                                                                      |
| created                             | 2017-06-07T14:04:55Z                                                                                                                                 |
| fault                               | {u'message': u'No valid host was found. There are not enough hosts available.', u'code': 500, u'details': u'  File "/usr/lib/python2.7/site-         |
|                                     | packages/nova/conductor/manager.py", line 561, in build_instances\n    context, request_spec, filter_properties)\n  File "/usr/lib/python2.7/site-   |
|                                     | packages/nova/conductor/manager.py", line 637, in _schedule_instances\n    hosts = self.scheduler_client.select_destinations(context, spec_obj)\n    |
|                                     | File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 371, in wrapped\n    return func(*args, **kwargs)\n  File "/usr/lib/python2.7  |
|                                     | /site-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations\n    return self.queryclient.select_destinations(context,         |
|                                     | spec_obj)\n  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method\n    return getattr(self.instance,  |
|                                     | __name)(*args, **kwargs)\n  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 32, in select_destinations\n    return      |
|                                     | self.scheduler_rpcapi.select_destinations(context, spec_obj)\n  File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 130, in       |
|                                     | select_destinations\n    return cctxt.call(ctxt, \'select_destinations\', **msg_args)\n  File "/usr/lib/python2.7/site-                              |
|                                     | packages/oslo_messaging/rpc/client.py", line 169, in call\n    retry=self.retry)\n  File "/usr/lib/python2.7/site-                                   |
|                                     | packages/oslo_messaging/transport.py", line 98, in _send\n    timeout=timeout, retry=retry)\n  File "/usr/lib/python2.7/site-                        |
|                                     | packages/oslo_messaging/_drivers/amqpdriver.py", line 505, in send\n    retry=retry)\n  File "/usr/lib/python2.7/site-                               |
|                                     | packages/oslo_messaging/_drivers/amqpdriver.py", line 496, in _send\n    raise result\n', u'created': u'2017-06-07T14:05:00Z'}                       |
| flavor                              | m1.tiny (1)                                                                                                                                          |
| hostId                              |                                                                                                                                                      |
| id                                  | d9ca7e44-c18d-4e23-9d09-1cbcdd029128                                                                                                                 |
| image                               | cirros (227ba822-96c6-4ebc-806b-69c205128a09)                                                                                                        |
| key_name                            | None                                                                                                                                                 |
| name                                | nisim2                                                                                                                                               |
| project_id                          | fab8aeea304b40ffbfbfcb61688cd1e7                                                                                                                     |
| properties                          |                                                                                                                                                      |
| status                              | ERROR                                                                                                                                                |
| updated                             | 2017-06-07T14:05:00Z                                                                                                                                 |
| user_id                             | 4d6e9684d77c43f88c2c3177019433be                                                                                                                     |
| volumes_attached                    |                                                                                                                                                      |
+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ 





Looking in the nova container's log:
2017-06-07 10:04:58.657 1 ERROR nova.virt.libvirt.driver [req-d798240b-b83b-4150-81c8-f92fd9344642 4d6e9684d77c43f88c2c3177019433be fab8aeea304b40ffbfbfcb61688cd1e7 - - -] [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128] Failed to start libvirt guest
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [req-d798240b-b83b-4150-81c8-f92fd9344642 4d6e9684d77c43f88c2c3177019433be fab8aeea304b40ffbfbfcb61688cd1e7 - - -] [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128] Instance failed to spawn
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128] Traceback (most recent call last):
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2157, in _build_resources
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     yield resources
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1963, in _build_and_run_instance
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     block_device_info=block_device_info)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2740, in spawn
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     destroy_disks_on_failure=True)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5159, in _create_domain_and_network
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     destroy_disks_on_failure)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     self.force_reraise()
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     six.reraise(self.type_, self.value, self.tb)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5131, in _create_domain_and_network
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     post_xml_callback=post_xml_callback)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5049, in _create_domain
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     guest.launch(pause=pause)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 146, in launch
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     self._encoded_xml, errors='ignore')
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     self.force_reraise()
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     six.reraise(self.type_, self.value, self.tb)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 141, in launch
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     return self._domain.createWithFlags(flags)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     rv = execute(f, *args, **kwargs)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     six.reraise(c, e, tb)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     rv = meth(*args, **kwargs)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128] libvirtError: internal error: Process exited prior to exec: 029128,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000002/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/d9ca7e44-c18d-4e23-9d09-1cbcdd029128/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -add-fd set=0,fd=28 -chardev file,id=charserial0,path=/dev/fdset/0,append=on -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.17.1.14:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,ad
2017-06-07 10:04:58.882 1 ERROR nova.compute.manager [instance: d9ca7e44-c18d-4e23-9d09-1cbcdd029128]
Comment 2 Sahid Ferdjaoui 2017-06-07 12:59:28 EDT
libvirt seems to complain with:

  libvirt:  error : Unable to move /dev/log mount to /var/run/libvirt/qemu/instance-00000002.log: No such file or directory

Let's see if we can find someone who can debug this.
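
Some background on why libvirt is touching /dev/log at all: starting with libvirt 3.1, the QEMU driver can build a private mount namespace for each guest and replicate every mount it finds under /dev into it, so a file bind-mounted at /dev/log (as Docker does for this container) is picked up by that enumeration. A minimal sketch of that discovery step, assuming /proc/mounts-style input (the helper and sample data below are illustrative, not libvirt's actual code):

```python
def mounts_under_dev(proc_mounts_text):
    """Return mount targets under /dev, parsed from /proc/mounts-style text.

    Each line is: <source> <target> <fstype> <options> <dump> <pass>.
    This loosely mimics the enumeration libvirt performs when replicating
    /dev into a guest's private mount namespace.
    """
    targets = []
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        target = fields[1]
        if target == "/dev" or target.startswith("/dev/"):
            targets.append(target)
    return targets


# Sample lines condensed from the `mount` output pasted later in this bug.
sample = """\
devtmpfs /dev devtmpfs rw,nosuid,seclabel 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,seclabel 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime 0 0
devtmpfs /dev/log devtmpfs rw,nosuid,seclabel 0 0
/dev/vda2 /etc/libvirt xfs ro,relatime 0 0
"""

print(mounts_under_dev(sample))
```

The point is that /dev/log is a mount in its own right inside the container, so libvirt tries to move it along with the rest of /dev, and that move is what fails with ENOENT.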
Comment 3 Sahid Ferdjaoui 2017-06-07 13:21:01 EDT
That does not seem to be related to Nova. Daniel suggested that I reassign the issue to the libvirt team.
Comment 5 Michal Privoznik 2017-06-13 10:58:16 EDT
Alexander,

What is the /dev/log file? Who creates it? To me, this bug looks very similar to bug 1431112, but that one should be fixed in libvirt-3.2.0 (yes, it is still in ASSIGNED because of a corner case).

Also, can you please attach libvirt debug logs?

http://wiki.libvirt.org/page/DebugLogs

Thanks.
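
For reference, the wiki page above boils down to setting log filters and outputs in /etc/libvirt/libvirtd.conf and restarting libvirtd; roughly (the exact filter list is a common suggestion, not mandated):

```
# /etc/libvirt/libvirtd.conf
log_filters="1:qemu 1:libvirt 1:security 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```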
Comment 6 Alexander Chuzhoy 2017-06-13 11:58 EDT
Created attachment 1287375 [details]
libvirt logs with debug enabled from the nova_libvirt container
Comment 7 Alexander Chuzhoy 2017-06-13 12:01:38 EDT
Cole, (since you changed the title to mention /dev/log) could you please answer Michal's question in comment #5: what is the /dev/log file?
Comment 8 Cole Robinson 2017-06-13 12:09:09 EDT
(In reply to Alexander Chuzhoy from comment #7)
> Cole, (since you changed the title to mention /dev/log) could you please
> answer Michal's question in comment #5: what is the /dev/log file?

I just changed the title to the error message from Comment #2; I don't know anything about this bug, sorry.
Comment 9 Michal Privoznik 2017-06-13 12:32:48 EDT
(In reply to Alexander Chuzhoy from comment #7)
> Cole, (since you changed the title to mention /dev/log) could you please
> answer Michal's question in comment #5: what is the /dev/log file?

Since libvirt tries to move it, it must be a mount point. What does `mount` say? Or perhaps fstab?

Also, can you please attach the domain logs? Usually, you can find them under /var/log/libvirt/qemu/$name.log.

Have you tried with the latest libvirt package?
Comment 10 Alexander Chuzhoy 2017-06-13 13:03:56 EDT
()[root@compute-0 /]# mount
/dev/mapper/docker-252:2-29360260-6e5de1af9aadf7686f5ca7d92e180b99301bf72994685af9fffb12d1e4c9636a on / type xfs (rw,relatime,seclabel,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /run/user/979 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=594596k,mode=700,uid=979,gid=975)
proc on /run/docker/netns/default type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=594596k,mode=700,uid=1000,gid=1000)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=2943600k,nr_inodes=735900,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
/dev/vda2 on /usr/share/zoneinfo/UTC type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
devtmpfs on /dev/log type devtmpfs (rw,nosuid,seclabel,size=2943600k,nr_inodes=735900,mode=755)
/dev/vda2 on /etc/libvirt type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /usr/lib/modules type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k)
/dev/vda2 on /etc/hosts type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /run/secrets type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
tmpfs on /run/libvirt type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
/dev/vda2 on /var/lib/nova type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /etc/libvirt/qemu type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /var/log/nova type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /var/lib/libvirt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /etc/pki/ca-trust/extracted type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /var/lib/kolla/config_files/config.json type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
/dev/vda2 on /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt type xfs (ro,relatime,seclabel,attr2,inode64,noquota)



docker inspect nova_libvirt
[
    {
        "Id": "6a5fae1d823dfc435a564c624818c371d0944cfd19b4b92d02066b8fc5c3af7a",
        "Created": "2017-06-12T14:44:12.719338068Z",
        "Path": "kolla_start",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 8406,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-06-13T15:51:36.643927429Z",
            "FinishedAt": "2017-06-13T15:51:33.250739696Z"
        },
        "Image": "sha256:75c6c11121a48411a0e2e6de9af8c18d4aaa3b13fd892256f499952b93738972",
        "ResolvConfPath": "/var/lib/docker/containers/6a5fae1d823dfc435a564c624818c371d0944cfd19b4b92d02066b8fc5c3af7a/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/6a5fae1d823dfc435a564c624818c371d0944cfd19b4b92d02066b8fc5c3af7a/hostname",
        "HostsPath": "/etc/hosts",
        "LogPath": "",
        "Name": "/nova_libvirt",
        "RestartCount": 0,
        "Driver": "devicemapper",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/sys/fs/cgroup:/sys/fs/cgroup",
                "/dev:/dev",
                "/var/lib/libvirt:/var/lib/libvirt",
                "/var/log/containers/nova:/var/log/nova",
                "/etc/localtime:/etc/localtime:ro",
                "/dev/log:/dev/log",
                "/lib/modules:/lib/modules:ro",
                "/var/lib/nova:/var/lib/nova",
                "/var/run/libvirt:/var/run/libvirt",
                "/etc/libvirt/qemu:/etc/libvirt/qemu",
                "/etc/hosts:/etc/hosts:ro",
                "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro",
                "/var/lib/kolla/config_files/nova-libvirt.json:/var/lib/kolla/config_files/config.json:ro",
                "/var/lib/config-data/nova_libvirt/etc/libvirt/:/etc/libvirt/:ro",
                "/run:/run",
                "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro",
                "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro",
                "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "journald",
                "Config": {}
            },
            "NetworkMode": "host",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "always",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "host",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "label=disable"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "docker-runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "devicemapper",
            "Data": {
                "DeviceId": "20",
                "DeviceName": "docker-252:2-29360260-6e5de1af9aadf7686f5ca7d92e180b99301bf72994685af9fffb12d1e4c9636a",
                "DeviceSize": "10737418240"
            }
        },
        "Mounts": [
            {
                "Source": "/var/log/containers/nova",
                "Destination": "/var/log/nova",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/dev/log",
                "Destination": "/dev/log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/nova",
                "Destination": "/var/lib/nova",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/pki/ca-trust/extracted",
                "Destination": "/etc/pki/ca-trust/extracted",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/pki/tls/cert.pem",
                "Destination": "/etc/pki/tls/cert.pem",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/sys/fs/cgroup",
                "Destination": "/sys/fs/cgroup",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kolla/config_files/nova-libvirt.json",
                "Destination": "/var/lib/kolla/config_files/config.json",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/pki/tls/certs/ca-bundle.trust.crt",
                "Destination": "/etc/pki/tls/certs/ca-bundle.trust.crt",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/libvirt",
                "Destination": "/var/lib/libvirt",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/run/libvirt",
                "Destination": "/var/run/libvirt",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/hosts",
                "Destination": "/etc/hosts",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/run",
                "Destination": "/run",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/pki/tls/certs/ca-bundle.crt",
                "Destination": "/etc/pki/tls/certs/ca-bundle.crt",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/dev",
                "Destination": "/dev",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/localtime",
                "Destination": "/etc/localtime",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/libvirt/qemu",
                "Destination": "/etc/libvirt/qemu",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/config-data/nova_libvirt/etc/libvirt",
                "Destination": "/etc/libvirt",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "compute-0.localdomain",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=oci",
                "KOLLA_BASE_DISTRO=centos",
                "KOLLA_INSTALL_TYPE=binary",
                "KOLLA_INSTALL_METATYPE=rhos",
                "PS1=$(tput bold)($(printenv KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ "
            ],
            "Cmd": [
                "kolla_start"
            ],
            "Image": "192.168.24.1:8787/rhosp12/openstack-nova-libvirt-docker:2017-06-01.5",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "Kolla-SHA": "4.0.0.0rc1-348-g2666d16",
                "architecture": "x86_64",
                "authoritative-source-url": "registry.access.redhat.com",
                "build-date": "2017-06-01T17:39:46.862132",
                "com.redhat.build-host": "ip-10-29-120-160.ec2.internal",
                "com.redhat.component": "openstack-nova-libvirt-docker",
                "config_id": "bc9f2053-32f7-4724-badc-af9c58661169",
                "container_name": "nova_libvirt",
                "deploy_resource_name": "0",
                "deploy_stack_id": "overcloud-AllNodesDeploySteps-ct6nqbghmnnz-ComputeContainersDeployment_Step3-kl4izhtl2l4s/b85c4d7f-ff9d-414c-92cf-e29503ae6a63",
                "description": "The Red Hat Enterprise Linux Base image is designed to be a fully supported foundation for your containerized applications.  This base image provides your operations and application teams with the packages, language runtimes and tools necessary to run, maintain, and troubleshoot all of your applications. This image is maintained by Red Hat and updated regularly. It is designed and engineered to be the base layer for all of your containerized applications, middleware and utilites. When used as the source for all of your containers, only one copy will ever be downloaded and cached in your production environment. Use this image just like you would a regular Red Hat Enterprise Linux distribution. Tools like yum, gzip, and bash are provided by default. For further information on how this image was built look at the /root/anacanda-ks.cfg file.",
                "distribution-scope": "public",
                "io.k8s.display-name": "Red Hat Enterprise Linux 7",
                "io.openshift.tags": "base rhel7",
                "kolla_version": "master",
                "managed_by": "docker-cmd",
                "name": "rhosp12-tech-preview/rhel-binary-nova-libvirt",
                "release": "20170601.1",
                "summary": "Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and supported base image.",
                "vcs-ref": "7e3fd490f08a77a1d99381d118338e1356d4b938",
                "vcs-type": "git",
                "vendor": "Red Hat, Inc.",
                "version": "12.0"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "417dbada21d0cd1116cf6d6849b5b47a1b48ec4be5bde840deae197f5f9c2ce6",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/default",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "96eef657de728cc2173c738404e977ce49f735642dc7d7510ed4606606761066",
                    "EndpointID": "5760bc16e710b055dcc229b51850e1a842005b47936f1e520e2d5ba2dde978fa",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
            }
        }
    }
]


It doesn't look like the domain was created - I'm not sure whether I should look in the container (the mentioned log isn't available, nor is the virsh command) or on the compute node, where virsh list --all returns an empty list.
Comment 11 Michal Privoznik 2017-06-14 11:55:42 EDT
(In reply to Alexander Chuzhoy from comment #10)
> ()[root@compute-0 /]# mount
> /dev/mapper/docker-252:2-29360260-
> 6e5de1af9aadf7686f5ca7d92e180b99301bf72994685af9fffb12d1e4c9636a on / type
> xfs
> (rw,relatime,seclabel,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,
> noquota)
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
> tmpfs on /run/user/979 type tmpfs
> (rw,nosuid,nodev,relatime,seclabel,size=594596k,mode=700,uid=979,gid=975)
> proc on /run/docker/netns/default type proc (rw,nosuid,nodev,noexec,relatime)
> tmpfs on /run/user/1000 type tmpfs
> (rw,nosuid,nodev,relatime,seclabel,size=594596k,mode=700,uid=1000,gid=1000)
> devtmpfs on /dev type devtmpfs
> (rw,nosuid,seclabel,size=2943600k,nr_inodes=735900,mode=755)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
> devpts on /dev/pts type devpts
> (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
> mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
> /dev/vda2 on /usr/share/zoneinfo/UTC type xfs
> (ro,relatime,seclabel,attr2,inode64,noquota)
> devtmpfs on /dev/log type devtmpfs
> (rw,nosuid,seclabel,size=2943600k,nr_inodes=735900,mode=755)

So /dev/log is a mount point, as I suspected. However, the kernel unhelpfully doesn't show where this mount actually comes from. Anyway, I've prepared an upstream build with some extra debug messages inserted in a few places (hoping it'll let me see what is going on):

https://mprivozn.fedorapeople.org/openstack/

Can you please give it a try?
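
In the meantime, one way to see where a mount like /dev/log comes from is to ask findmnt, or to read /proc/self/mountinfo directly - a diagnostic sketch, safe to run unprivileged, assuming a Linux /proc:

```shell
# Show the source device and filesystem behind /dev/log, if it is a mount
# point in this mount namespace (findmnt exits non-zero otherwise).
findmnt /dev/log 2>/dev/null || echo "/dev/log is not a mount point here"

# /proc/self/mountinfo additionally lists the propagation peer-group tags;
# field 5 is the mount point.
devlog_entry=$(awk '$5 == "/dev/log"' /proc/self/mountinfo)
echo "${devlog_entry:-no /dev/log entry in this namespace}"
```

On the affected compute node this should print the devtmpfs line quoted above, plus any shared:/master: tags that hint at which namespace the mount was inherited from.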

> 
> docker inspect nova_libvirt
> [
>     {
>         "Id":
> "6a5fae1d823dfc435a564c624818c371d0944cfd19b4b92d02066b8fc5c3af7a",
>         "Created": "2017-06-12T14:44:12.719338068Z",
>         "Path": "kolla_start",
>         "Args": [],
>         "State": {
>             "Status": "running",
>             "Running": true,
>             "Paused": false,
>             "Restarting": false,
>             "OOMKilled": false,
>             "Dead": false,
>             "Pid": 8406,
>             "ExitCode": 0,
>             "Error": "",
>             "StartedAt": "2017-06-13T15:51:36.643927429Z",
>             "FinishedAt": "2017-06-13T15:51:33.250739696Z"
>         },
>         "Image":
> "sha256:75c6c11121a48411a0e2e6de9af8c18d4aaa3b13fd892256f499952b93738972",
>         "ResolvConfPath":
> "/var/lib/docker/containers/
> 6a5fae1d823dfc435a564c624818c371d0944cfd19b4b92d02066b8fc5c3af7a/resolv.
> conf",
>         "HostnamePath":
> "/var/lib/docker/containers/
> 6a5fae1d823dfc435a564c624818c371d0944cfd19b4b92d02066b8fc5c3af7a/hostname",
>         "HostsPath": "/etc/hosts",
>         "LogPath": "",
>         "Name": "/nova_libvirt",
>         "RestartCount": 0,
>         "Driver": "devicemapper",
>         "MountLabel": "",
>         "ProcessLabel": "",
>         "AppArmorProfile": "",
>         "ExecIDs": null,
>         "HostConfig": {
>             "Binds": [
>                 "/sys/fs/cgroup:/sys/fs/cgroup",
>                 "/dev:/dev",
>                 "/var/lib/libvirt:/var/lib/libvirt",
>                 "/var/log/containers/nova:/var/log/nova",
>                 "/etc/localtime:/etc/localtime:ro",
>                 "/dev/log:/dev/log",

This looks interesting.

>                 "/lib/modules:/lib/modules:ro",
>                 "/var/lib/nova:/var/lib/nova",
>                 "/var/run/libvirt:/var/run/libvirt",
>                 "/etc/libvirt/qemu:/etc/libvirt/qemu",
>                 "/etc/hosts:/etc/hosts:ro",
>                 "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro",
>                
> "/var/lib/kolla/config_files/nova-libvirt.json:/var/lib/kolla/config_files/
> config.json:ro",
>                
> "/var/lib/config-data/nova_libvirt/etc/libvirt/:/etc/libvirt/:ro",
>                 "/run:/run",
>                
> "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro",
>                
> "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.
> crt:ro",
>                 "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro"
>             ],


>         "Mounts": [
>             {
>                 "Source": "/var/log/containers/nova",
>                 "Destination": "/var/log/nova",
>                 "Mode": "",
>                 "RW": true,
>                 "Propagation": "rprivate"
>             },
>             {
>                 "Source": "/dev/log",
>                 "Destination": "/dev/log",
>                 "Mode": "",
>                 "RW": true,
>                 "Propagation": "rprivate"
>             },

And so does this one.

> 
> 
> Doesn't look domain was created - not sure if I should look on container
> (the mentioned log isn't available, nor the virsh command) or the compute
> node, where the virsh list --all returns an empty list.

You're running libvirt inside a container? Firstly, this is not really supported yet; we are working on that. Libvirtd expects full access to the host, otherwise we cannot make guarantees (e.g. when it comes to device assignment). Anyway, if you have followed the wiki page I linked in comment 5, the domain log should be there. You might need to look inside the container if you run libvirtd from within one.
Comment 13 Michal Privoznik 2017-06-15 11:49:12 EDT
(In reply to Michal Privoznik from comment #11)
>

Maybe I know what is going on. At least, the symptoms are very similar to

https://bugzilla.redhat.com/show_bug.cgi?id=1431112#c11

/dev/log is also a file. However, I'm not quite sure how to fix this. We'd need to bind-mount /dev/log from the parent namespace onto /dev/log in the qemu namespace. So I've created a patch that bind-mounts /dev/log to a temp location, forks, enters the qemu namespace, and tries to move the mount point from the temp location onto /dev/log. But mounts are not getting propagated. I thought that this might be yet another problem in the libvirt code, but it turns out I can reproduce the issue even from the bare command line:

# mount --make-rshared /
# mount | grep floppy

# unshare -m /bin/bash
# mount --make-rslave /

Now mount floppy from a different terminal:

# mount /dev/sdb1 /mnt/floppy/
# mount | grep floppy
/dev/sdb1 on /mnt/floppy type ext4 (rw,relatime,data=ordered)


At this point, the mount should have been propagated into the namespace. Well, it hasn't been:

# mount | grep floppy


I'm wondering whether this is a kernel bug or whether I'm missing something.
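
A quick way to check what propagation mode actually ended up on / in each namespace (a diagnostic sketch, unprivileged-safe) is to look at the optional fields in /proc/self/mountinfo: a `shared:N` tag means the mount propagates to a peer group, `master:N` means it receives events from one (i.e. --make-rslave took effect), and no tag at all means private - no propagation either way:

```shell
# Print the optional propagation fields (between field 6 and the "-"
# separator) for the root mount of the current namespace.
root_tags=$(awk '$5 == "/" { for (i = 7; $i != "-"; i++) printf "%s ", $i; exit }' \
    /proc/self/mountinfo)
echo "root mount propagation: ${root_tags:-(none - private)}"
```

Run inside the `unshare -m` shell from the reproducer, this should show a `master:N` tag after `mount --make-rslave /`; if it shows nothing, the namespace's root is private and no propagation can be expected.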
Comment 14 Daniel Berrange 2017-06-15 12:07:10 EDT
(In reply to Michal Privoznik from comment #13)
> (In reply to Michal Privoznik from comment #11)
> >
> 
> Maybe I know what is going on. At least, the symptoms are very similar to
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1431112#c11
> 
> /dev/log is also a file. However, I'm not quite sure how to fix this. We'd
> need to bind mount /dev/log from parent namespace onto /dev/log in qemu ns.
> So I've created a patch that binds mount /dev/log to a temp location, forks
> and enters the qemu's namespace and tries to move the mount point from temp
> location onto /dev/log. But mounts are not getting propagated. I though that
> this might be yet another problem in libvirt code, but it turns out I can
> reproduce the issue even from bare cmd line:

I'm not seeing why we need any access to the host namespace for this.

We should be able to just MS_MOVE the existing /dev/log that we see into our new /dev directory, in the same way that we do for other sub-mounts of /dev like /dev/shm or /dev/pts. The only difference is that we must make sure we touch a /dev/log file, instead of mkdir'ing an empty /dev/log directory, since the source is a file mount, not a directory mount.
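
The file-vs-directory distinction can be sketched with throwaway temp files instead of the real /dev/log (a hedged illustration: the paths are made up, and the mount calls need root/CAP_SYS_ADMIN, so the sketch skips them when unprivileged):

```shell
# Bind-mount one regular file onto another (this is what a file mount like
# /dev/log is), then relocate it with MS_MOVE. The target of each step must
# be an existing *file* - which is why the new /dev tree needs
# `touch /dev/log`, not `mkdir /dev/log`.
demo_file_mount_move() {
    src=$(mktemp) && old=$(mktemp) && new=$(mktemp) || return 1
    mount --bind "$src" "$old" 2>/dev/null || { rm -f "$src" "$old" "$new"; return 1; }
    if mount --move "$old" "$new" 2>/dev/null; then
        umount "$new"          # MS_MOVE kept the mount live the whole time
        status=0
    else
        umount "$old"          # MS_MOVE is refused when the source's parent mount is shared
        status=1
    fi
    rm -f "$src" "$old" "$new"
    return $status
}
if demo_file_mount_move; then
    result="file mount bind-mounted, moved, and cleaned up"
else
    result="skipped (needs root, and MS_MOVE refuses sources under a shared parent mount)"
fi
echo "$result"
```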
Comment 15 Michal Privoznik 2017-06-16 03:34:09 EDT
(In reply to Daniel Berrange from comment #14)
> (In reply to Michal Privoznik from comment #13)
> > (In reply to Michal Privoznik from comment #11)
> > >
> > 
> > Maybe I know what is going on. At least, the symptoms are very similar to
> > 
> > https://bugzilla.redhat.com/show_bug.cgi?id=1431112#c11
> > 
> > /dev/log is also a file. However, I'm not quite sure how to fix this. We'd
> > need to bind mount /dev/log from parent namespace onto /dev/log in qemu ns.
> > So I've created a patch that binds mount /dev/log to a temp location, forks
> > and enters the qemu's namespace and tries to move the mount point from temp
> > location onto /dev/log. But mounts are not getting propagated. I though that
> > this might be yet another problem in libvirt code, but it turns out I can
> > reproduce the issue even from bare cmd line:
> 
> I'm not seeing why we need any access to the host namespace for this.
> 
> We should be able to just MS_MOVE the existing /dev/log that we see, into
> our new /dev directory, in the same way that we do for other sub-mounts of
> /dev like /dev/shm or /dev/pts, etc. The only difference is that we must
> make sure we touch a /dev/log file, instead of mkdir an empty /dev/log
> directory, since the source is a file not a dir mount.

Sure, this will work when starting up the domain. But what about hotplug? What if /dev/log is a channel that a user wants to plug in while the domain is running? In that case I need to see both the host namespace and the qemu namespace, don't I?
Comment 16 Daniel Berrange 2017-06-16 04:48:18 EDT
(In reply to Michal Privoznik from comment #15)
> Sure, this will work when starting up the domain. But what about hotplug?
> What if /dev/log is a channel that users want to plug in whilst the domain
> is running? In that case I need to see the host namespace and qemu
> namespace, don't I?

We just need to make sure we don't fully unmount the original /dev in the QEMU namespace - instead, just move it to /var/run/libvirt/qemu/$GUEST/olddev and leave it there forever so we can still use it later.
Comment 17 Michal Privoznik 2017-06-16 04:56:15 EDT
(In reply to Daniel Berrange from comment #16)
> (In reply to Michal Privoznik from comment #15)
> > Sure, this will work when starting up the domain. But what about hotplug?
> > What if /dev/log is a channel that users want to plug in whilst the domain
> > is running? In that case I need to see the host namespace and qemu
> > namespace, don't I?
> 
> We just need to make sure we don't fully unmount the original /dev in the
> QEMU namespace - instead just move it to /var/run/libvirt/qemu/$GUEST/olddev
> and leave it there forever so we can still use it later

Not quite. Imagine I have a USB stick with some files on it, and after the domain is started I want to attach one of those files as a new disk to the domain, i.e. hotplug it. So I mount the USB stick, and that's about it. The mount is not propagated to the qemu namespace, therefore qemu will not see /mnt/stick/whatever.file.
Instead of hacky approaches we really need to have mount events propagated correctly.
Comment 18 Michal Privoznik 2017-06-20 12:26:47 EDT
(In reply to Michal Privoznik from comment #11)
> <snip/>
> 
> So /dev/log is a mount point. As I suspected. However, stupid kernel
> behaviour doesn't show where does this actually come from. Anyway, I've
> prepared an upstream build with some more debug messages inserted at some
> places (hoping it'll let me see what is going on):
> 
> https://mprivozn.fedorapeople.org/openstack/
> 
> Can you please give it a try?

Actually, forget that one, I think I have a fix. It's available here:

https://mprivozn.fedorapeople.org/openstack2/

Can you please give it a try so that I can send my patches to the list? Thanks!
Comment 19 Alexander Chuzhoy 2017-06-20 13:22:45 EDT
Updated the following rpms:
sudo yum localinstall https://mprivozn.fedorapeople.org/openstack2/libvirt-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-interface-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-config-network-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-nwfilter-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-libs-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-3.5.0-1.el7.x86_64.rpm  https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-qemu-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-secret-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-lxc-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-network-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-kvm-3.5.0-1.el7.x86_64.rpm  https://mprivozn.fedorapeople.org/openstack2/libvirt-client-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-nodedev-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-mpath-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-iscsi-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-config-nwfilter-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-core-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-disk-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-scsi-3.5.0-1.el7.x86_64.rpm 
https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-core-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-gluster-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-logical-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-rbd-3.5.0-1.el7.x86_64.rpm


Trying to boot an instance:
(overcloud) [stack@undercloud-0 ~]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ openstack server create --flavor 1 --image cirros vm1
REQ: curl -g -i -X GET http://10.0.0.104:8774/v2.1/flavors/1 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}9c2347433588952227bb3d9e6e99c0becac7508c"
RESP: [200] Content-Length: 354 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Openstack-Request-Id: req-88bd5f2e-a43f-401d-849f-bf53ed46d93e X-Compute-Request-Id: req-88bd5f2e-a43f-401d-849f-bf53ed46d93e Date: Tue, 20 Jun 2017 17:16:01 GMT 
RESP BODY: {"flavor": {"links": [{"href": "http://10.0.0.104:8774/v2.1/flavors/1", "rel": "self"}, {"href": "http://10.0.0.104:8774/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "disk": 1, "id": "1", "name": "m1.tiny", "vcpus": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0}}

GET call to compute for http://10.0.0.104:8774/v2.1/flavors/1 used request id req-88bd5f2e-a43f-401d-849f-bf53ed46d93e
REQ: curl -g -i -X POST http://10.0.0.104:8774/v2.1/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}9c2347433588952227bb3d9e6e99c0becac7508c" -d '{"server": {"min_count": 1, "flavorRef": "1", "name": "vm1", "imageRef": "a9f38bfa-72d3-48c0-b340-71431f78f758", "max_count": 1}}'
RESP: [202] Content-Length: 372 Location: http://10.0.0.104:8774/v2.1/servers/944f9843-af1c-463e-99ce-9a62bdb81484 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Openstack-Request-Id: req-26b2e430-5d4e-451e-b947-a23675a4fda5 X-Compute-Request-Id: req-26b2e430-5d4e-451e-b947-a23675a4fda5 Date: Tue, 20 Jun 2017 17:16:02 GMT 
RESP BODY: {"server": {"security_groups": [{"name": "default"}], "OS-DCF:diskConfig": "MANUAL", "id": "944f9843-af1c-463e-99ce-9a62bdb81484", "links": [{"href": "http://10.0.0.104:8774/v2.1/servers/944f9843-af1c-463e-99ce-9a62bdb81484", "rel": "self"}, {"href": "http://10.0.0.104:8774/servers/944f9843-af1c-463e-99ce-9a62bdb81484", "rel": "bookmark"}], "adminPass": "qyFrYkxXw62S"}}

POST call to compute for http://10.0.0.104:8774/v2.1/servers used request id req-26b2e430-5d4e-451e-b947-a23675a4fda5
REQ: curl -g -i -X GET http://10.0.0.104:8774/v2.1/servers/944f9843-af1c-463e-99ce-9a62bdb81484 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}9c2347433588952227bb3d9e6e99c0becac7508c"
RESP: [200] Content-Length: 1263 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Openstack-Request-Id: req-4f28abb5-abf5-4208-ac21-658f055d9789 X-Compute-Request-Id: req-4f28abb5-abf5-4208-ac21-658f055d9789 Date: Tue, 20 Jun 2017 17:16:02 GMT 
RESP BODY: {"server": {"OS-EXT-STS:task_state": "scheduling", "addresses": {}, "links": [{"href": "http://10.0.0.104:8774/v2.1/servers/944f9843-af1c-463e-99ce-9a62bdb81484", "rel": "self"}, {"href": "http://10.0.0.104:8774/servers/944f9843-af1c-463e-99ce-9a62bdb81484", "rel": "bookmark"}], "image": {"id": "a9f38bfa-72d3-48c0-b340-71431f78f758", "links": [{"href": "http://10.0.0.104:8774/images/a9f38bfa-72d3-48c0-b340-71431f78f758", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "building", "OS-EXT-SRV-ATTR:instance_name": "", "OS-SRV-USG:launched_at": null, "flavor": {"id": "1", "links": [{"href": "http://10.0.0.104:8774/flavors/1", "rel": "bookmark"}]}, "id": "944f9843-af1c-463e-99ce-9a62bdb81484", "user_id": "8e8a2bb4a3dc4abfbcc123c2b2e36d4a", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "OS-EXT-AZ:availability_zone": "", "config_drive": "", "status": "BUILD", "updated": "2017-06-20T17:16:02Z", "hostId": "", "OS-EXT-SRV-ATTR:host": null, "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "vm1", "created": "2017-06-20T17:16:01Z", "tenant_id": "06c8b5997a8649159cadcd10c8176b2f", "os-extended-volumes:volumes_attached": [], "metadata": {}}}

GET call to compute for http://10.0.0.104:8774/v2.1/servers/944f9843-af1c-463e-99ce-9a62bdb81484 used request id req-4f28abb5-abf5-4208-ac21-658f055d9789
REQ: curl -g -i -X GET http://10.0.0.104:8774/v2.1/flavors/1 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}9c2347433588952227bb3d9e6e99c0becac7508c"
RESP: [200] Content-Length: 354 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Openstack-Request-Id: req-9b5a9ac8-110a-49e3-b7c8-c1b61124599d X-Compute-Request-Id: req-9b5a9ac8-110a-49e3-b7c8-c1b61124599d Date: Tue, 20 Jun 2017 17:16:02 GMT 
RESP BODY: {"flavor": {"links": [{"href": "http://10.0.0.104:8774/v2.1/flavors/1", "rel": "self"}, {"href": "http://10.0.0.104:8774/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "disk": 1, "id": "1", "name": "m1.tiny", "vcpus": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0}}

GET call to compute for http://10.0.0.104:8774/v2.1/flavors/1 used request id req-9b5a9ac8-110a-49e3-b7c8-c1b61124599d
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | qyFrYkxXw62S                                  |
| config_drive                        |                                               |
| created                             | 2017-06-20T17:16:01Z                          |
| flavor                              | m1.tiny (1)                                   |
| hostId                              |                                               |
| id                                  | 944f9843-af1c-463e-99ce-9a62bdb81484          |
| image                               | cirros (a9f38bfa-72d3-48c0-b340-71431f78f758) |
| key_name                            | None                                          |
| name                                | vm1                                           |
| progress                            | 0                                             |
| project_id                          | 06c8b5997a8649159cadcd10c8176b2f              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2017-06-20T17:16:02Z                          |
| user_id                             | 8e8a2bb4a3dc4abfbcc123c2b2e36d4a              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 944f9843-af1c-463e-99ce-9a62bdb81484 | vm1  | BUILD  | spawning   | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 944f9843-af1c-463e-99ce-9a62bdb81484 | vm1  | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+
(overcloud) [stack@undercloud-0 ~]$ nova show vm1
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                                                                                                                                      |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                                                                                                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                                                                                                                                                                       |
| OS-EXT-SRV-ATTR:host                 | -                                                                                                                                                                                                          |
| OS-EXT-SRV-ATTR:hostname             | vm1                                                                                                                                                                                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                                                                                                                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                                                                                                                                                                                          |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                                                                                                                                            |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                                                                                                                                          |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                                                                                                                                            |
| OS-EXT-SRV-ATTR:reservation_id       | r-rlsl7u2a                                                                                                                                                                                                 |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                                                                                                                                                                   |
| OS-EXT-SRV-ATTR:user_data            | -                                                                                                                                                                                                          |
| OS-EXT-STS:power_state               | 0                                                                                                                                                                                                          |
| OS-EXT-STS:task_state                | -                                                                                                                                                                                                          |
| OS-EXT-STS:vm_state                  | error                                                                                                                                                                                                      |
| OS-SRV-USG:launched_at               | -                                                                                                                                                                                                          |
| OS-SRV-USG:terminated_at             | -                                                                                                                                                                                                          |
| accessIPv4                           |                                                                                                                                                                                                            |
| accessIPv6                           |                                                                                                                                                                                                            |
| config_drive                         |                                                                                                                                                                                                            |
| created                              | 2017-06-20T17:16:01Z                                                                                                                                                                                       |
| description                          | vm1                                                                                                                                                                                                        |
| fault                                | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": "  File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 562, in build_instances |
|                                      |     context, request_spec, filter_properties, instance_uuids)                                                                                                                                              |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 640, in _schedule_instances                                                                                                    |
|                                      |     instance_uuids)                                                                                                                                                                                        |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", line 371, in wrapped                                                                                                                  |
|                                      |     return func(*args, **kwargs)                                                                                                                                                                           |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 52, in select_destinations                                                                                             |
|                                      |     instance_uuids)                                                                                                                                                                                        |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 37, in __run_method                                                                                                    |
|                                      |     return getattr(self.instance, __name)(*args, **kwargs)                                                                                                                                                 |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", line 33, in select_destinations                                                                                                |
|                                      |     instance_uuids)                                                                                                                                                                                        |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", line 136, in select_destinations                                                                                                     |
|                                      |     return cctxt.call(ctxt, 'select_destinations', **msg_args)                                                                                                                                             |
|                                      |   File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", line 169, in call                                                                                                                |
|                                      |     retry=self.retry)                                                                                                                                                                                      |
|                                      |   File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", line 123, in _send                                                                                                                |
|                                      |     timeout=timeout, retry=retry)                                                                                                                                                                          |
|                                      |   File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 505, in send                                                                                                       |
|                                      |     retry=retry)                                                                                                                                                                                           |
|                                      |   File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 496, in _send                                                                                                      |
|                                      |     raise result                                                                                                                                                                                           |
|                                      | ", "created": "2017-06-20T17:16:06Z"}                                                                                                                                                                      |
| flavor                               | m1.tiny (1)                                                                                                                                                                                                |
| hostId                               |                                                                                                                                                                                                            |
| host_status                          |                                                                                                                                                                                                            |
| id                                   | 944f9843-af1c-463e-99ce-9a62bdb81484                                                                                                                                                                       |
| image                                | cirros (a9f38bfa-72d3-48c0-b340-71431f78f758)                                                                                                                                                              |
| key_name                             | -                                                                                                                                                                                                          |
| locked                               | False                                                                                                                                                                                                      |
| metadata                             | {}                                                                                                                                                                                                         |
| name                                 | vm1                                                                                                                                                                                                        |
| os-extended-volumes:volumes_attached | []                                                                                                                                                                                                         |
| status                               | ERROR                                                                                                                                                                                                      |
| tags                                 | []                                                                                                                                                                                                         |
| tenant_id                            | 06c8b5997a8649159cadcd10c8176b2f                                                                                                                                                                           |
| updated                              | 2017-06-20T17:16:06Z                                                                                                                                                                                       |
| user_id                              | 8e8a2bb4a3dc4abfbcc123c2b2e36d4a                                                                                                                                                                           |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$

Attaching nova-compute.log from the container with the respective errors.
Comment 20 Alexander Chuzhoy 2017-06-20 13:24 EDT
Created attachment 1289757 [details]
/var/log/containers/nova/nova-compute.log
Comment 21 Michal Privoznik 2017-06-21 02:56:51 EDT
(In reply to Alexander Chuzhoy from comment #20)
> Created attachment 1289757 [details]
> /var/log/containers/nova/nova-compute.log

2017-06-20 13:16:03.138 1 ERROR nova.virt.libvirt.driver [req-26b2e430-5d4e-451e-b947-a23675a4fda5 8e8a2bb4a3dc4abfbcc123c2b2e36d4a 06c8b5997a8649159cadcd10c8176b2f - default default] [instance: 944f9843-af1c-463e-99ce-9a62bdb81484] Failed to start libvirt guest: libvirtError: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': No such file or directory

Looks like your virtlogd is not running. Can you start it and retry please?
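That check can be sketched as a small shell helper. This is an illustrative sketch only: the socket path below is libvirt's compiled-in default (libvirtd.conf can override it), and the `check_virtlogd` function name is made up here.

```shell
# Hedged sketch: report whether virtlogd's socket is present before retrying
# the instance boot. Path is libvirt's default; libvirtd.conf may override it.
check_virtlogd() {
    sock=${1:-/var/run/libvirt/virtlogd-sock}
    if [ -S "$sock" ]; then
        echo "virtlogd socket present: $sock"
    else
        echo "virtlogd socket missing: $sock"
        # On a plain (non-containerized) host the usual fix would be:
        #   systemctl start virtlogd.socket virtlogd.service
        # Inside the nova_libvirt container, virtlogd has to be launched by
        # whatever supervisor starts libvirtd.
        return 1
    fi
}
check_virtlogd || true   # non-fatal here; the message says what to do next
```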



There is one suspicious line even now:

2017-06-19 15:30:47.562 1 ERROR nova.virt.libvirt.driver [req-b51f0025-3b00-4977-be0e-43028ecec7a7 8e8a2bb4a3dc4abfbcc123c2b2e36d4a 06c8b5997a8649159cadcd10c8176b2f - default default] [instance: 9a3b612f-d457-42ea-bcc6-46a28042f385] Failed to start libvirt guest: libvirtError: internal error: Process exited prior to exec: bcc6-46a28042f385,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/9a3b612f-d457-42ea-bcc6-46a28042f385/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -add-fd set=0,fd=27 -chardev pty,id=charserial0,logfile=/dev/fdset/0,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.17.1.15:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
libvirt:  error : Unable to move /dev

However, this dates from before I even built the packages, so I think it's an old artefact in the logs. Also, nova logs alone are usually not enough to debug a libvirt issue.
Comment 22 Alexander Chuzhoy 2017-06-21 10:44:22 EDT
This is a containerized deployment; I can't start libvirt on the compute node directly, only stop/start the container. 
Furthermore, I rebooted the compute node:

[root@compute-0 ~]# docker ps
CONTAINER ID        IMAGE                                                                               COMMAND             CREATED             STATUS              PORTS               NAMES
d12532d1a459        192.168.24.1:8787/rhosp12/openstack-neutron-openvswitch-agent-docker:2017-06-19.1   "kolla_start"       16 hours ago        Up 3 seconds                            neutron_ovs_agent
9fee333b6bef        192.168.24.1:8787/rhosp12/openstack-ceilometer-compute-docker:2017-06-19.1          "kolla_start"       16 hours ago        Up 4 minutes                            ceilometer_agent_compute
7c51afd32024        192.168.24.1:8787/rhosp12/openstack-nova-compute-docker:2017-06-19.1                "kolla_start"       16 hours ago        Up 4 minutes                            nova_compute
9239e77226a7        192.168.24.1:8787/rhosp12/openstack-nova-libvirt-docker:2017-06-19.1                "kolla_start"       17 hours ago        Up 4 minutes                            nova_libvirt



2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [req-d9c17445-4d7a-4485-ab27-c3a8141fc0ee 6b1bc499dc08454f857bdfedd1eb693c 41deab6d19f148108470004003ba1fee - default default] [instance: 7be8712f-4000-401f-b5c
d-f69d3d27470a] Instance failed to spawn: libvirtError: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': No such file or directory
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a] Traceback (most recent call last):
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2164, in _build_resources
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     yield resources
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1970, in _build_and_run_instance
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     block_device_info=block_device_info)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2751, in spawn
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     destroy_disks_on_failure=True)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5185, in _create_domain_and_network
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     destroy_disks_on_failure)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     self.force_reraise()
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     six.reraise(self.type_, self.value, self.tb)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5157, in _create_domain_and_network
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     post_xml_callback=post_xml_callback)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5075, in _create_domain
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     guest.launch(pause=pause)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 146, in launch
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     self._encoded_xml, errors='ignore')
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     self.force_reraise()
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     six.reraise(self.type_, self.value, self.tb)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 141, in launch
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     return self._domain.createWithFlags(flags)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     rv = execute(f, *args, **kwargs)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     six.reraise(c, e, tb)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     rv = meth(*args, **kwargs)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2017-06-21 10:41:36.340 1 ERROR nova.compute.manager [instance: 7be8712f-4000-401f-b5cd-f69d3d27470a] libvir
Comment 23 Alexander Chuzhoy 2017-06-21 10:45:58 EDT
The tree of container logs:
/var/log/containers/
/var/log/containers/nova
/var/log/containers/nova/nova-compute.log
/var/log/containers/neutron
/var/log/containers/neutron/neutron-openvswitch-agent.log


What log would you need?
Comment 24 Michal Privoznik 2017-06-21 11:06:04 EDT
(In reply to Alexander Chuzhoy from comment #22)
> This is a containerized deployment, I can't start libvirt on compute and
> instead stop/start the container. 

So how do you update libvirt there? Regardless, you need virtlogd running; it can be started by the same process that starts libvirtd, for instance. This looks like a container issue to me: whoever assembles the container should make sure all the needed services are running.
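An entrypoint along those lines could look like the following sketch. The binary paths are taken from the `Running command:` line later in this bug, and the real kolla entrypoint certainly differs; this only illustrates the ordering described above. The script is written to a temp file so it can be inspected without any libvirt binaries installed.

```shell
# Hedged sketch: an entrypoint that brings up virtlogd before libvirtd, so
# /var/run/libvirt/virtlogd-sock exists by the time the first guest starts.
cat > /tmp/nova_libvirt_entrypoint.sh <<'EOF'
#!/bin/sh
# Log daemon first: qemu guests fail to boot without virtlogd-sock.
/usr/sbin/virtlogd --daemon
# Then libvirtd in the foreground, as the container's main process.
exec /usr/sbin/libvirtd --config /etc/libvirt/libvirtd.conf
EOF
chmod +x /tmp/nova_libvirt_entrypoint.sh
```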
Comment 25 Michal Privoznik 2017-06-21 11:42:32 EDT
(In reply to Alexander Chuzhoy from comment #23)
> The tree of cointainer logs:
> /var/log/containers/
> /var/log/containers/nova
> /var/log/containers/nova/nova-compute.log
> /var/log/containers/neutron
> /var/log/containers/neutron/neutron-openvswitch-agent.log
> 
> 
> What log would you need?

Neither of those. Libvirt logs. As described in comment 5. Nova is way too high up the stack for efficient libvirt debugging.
Comment 26 Alexander Chuzhoy 2017-06-21 12:52:36 EDT
Tried to modify the container image used for the nova_libvirt container:


Dockerfile:
FROM 192.168.24.1:8787/rhosp12/openstack-nova-libvirt-docker:2017-06-19.1
RUN yum localinstall -y https://mprivozn.fedorapeople.org/openstack2/libvirt-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-interface-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-config-network-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-nwfilter-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-libs-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-3.5.0-1.el7.x86_64.rpm  https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-qemu-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-secret-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-lxc-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-network-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-kvm-3.5.0-1.el7.x86_64.rpm  https://mprivozn.fedorapeople.org/openstack2/libvirt-client-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-nodedev-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-mpath-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-iscsi-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-config-nwfilter-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-core-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-disk-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-scsi-3.5.0-1.el7.x86_64.rpm 
https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-core-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-gluster-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-logical-3.5.0-1.el7.x86_64.rpm https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-rbd-3.5.0-1.el7.x86_64.rpm



The container fails to start because of:
[root@compute-0 ~]# docker logs nova_libvirt
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
INFO:__main__:Validating config file
INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
INFO:__main__:Writing out command to execute
INFO:__main__:Setting permission for /var/log/nova
INFO:__main__:Setting permission for /var/log/nova/nova-compute.log
Running command: '/usr/sbin/libvirtd --config /etc/libvirt/libvirtd.conf'
2017-06-21 16:50:30.665+0000: 18579: info : libvirt version: 3.5.0, package: 1.el7 (Unknown, 2017-06-20-18:23:45, rhel7-next)
2017-06-21 16:50:30.665+0000: 18579: info : hostname: compute-0.localdomain
2017-06-21 16:50:30.665+0000: 18579: error : secretStateInitialize:469 : cannot create config directory '/etc/libvirt/secrets': Read-only file system
2017-06-21 16:50:30.665+0000: 18579: error : virStateInitialize:775 : Initialization of secret state driver failed: cannot create config directory '/etc/libvirt/secrets': Read-only file system
2017-06-21 16:50:30.665+0000: 18579: error : daemonRunStateInit:882 : Driver state initialization failed
Comment 27 Alexander Chuzhoy 2017-06-21 13:06:06 EDT
After running "mkdir /var/lib/config-data/nova_libvirt/etc/libvirt/secrets", I was able to start the nova_libvirt container.
It was built with:

Dockerfile:
FROM 192.168.24.1:8787/rhosp12/openstack-nova-libvirt-docker:2017-06-19.1
RUN yum localinstall -y \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-interface-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-config-network-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-nwfilter-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-libs-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-qemu-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-secret-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-lxc-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-network-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-kvm-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-client-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-nodedev-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-mpath-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-iscsi-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-config-nwfilter-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-core-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-disk-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-scsi-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-gluster-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-logical-3.5.0-1.el7.x86_64.rpm \
    https://mprivozn.fedorapeople.org/openstack2/libvirt-daemon-driver-storage-rbd-3.5.0-1.el7.x86_64.rpm



Running an instance in the overcloud still results in an error.

The log is mounted this way:
/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro",

[root@compute-0 ~]# grep -i error /var/log/libvirt/qemu/instance-00000002.log 
libvirt:  error : Unable to move /dev/log mount to /var/run/libvirt/qemu/instance-00000002.log: No such file or directory



All mounts for container:
      "volumes": [
        "/etc/hosts:/etc/hosts:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/puppet:/etc/puppet:ro",
        "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro",
        "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro",
        "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro",
        "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro",
        "/dev/log:/dev/log",
        "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro",
        "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro",
        "/var/lib/config-data/nova_libvirt/etc/libvirt/:/etc/libvirt/:ro",
        "/lib/modules:/lib/modules:ro",
        "/dev:/dev",
        "/run:/run",
        "/sys/fs/cgroup:/sys/fs/cgroup",
        "/var/lib/nova:/var/lib/nova",
        "/var/run/libvirt:/var/run/libvirt",
        "/var/lib/libvirt:/var/lib/libvirt",
        "/etc/libvirt/qemu:/etc/libvirt/qemu",
        "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro",
        "/var/log/containers/nova:/var/log/nova"


starting command:
/bin/docker run -d --name nova_libvirt --pid=host --env=KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --volume=/etc/hosts:/etc/hosts:ro --volume=/etc/localtime:/etc/localtime:ro --volume=/etc/puppet:/etc/puppet:ro --volume=/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume=/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume=/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume=/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume=/dev/log:/dev/log --volume=/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume=/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro --volume=/var/lib/config-data/nova_libvirt/etc/libvirt/:/etc/libvirt/:ro --volume=/lib/modules:/lib/modules:ro --volume=/dev:/dev --volume=/run:/run --volume=/sys/fs/cgroup:/sys/fs/cgroup --volume=/var/lib/nova:/var/lib/nova --volume=/var/run/libvirt:/var/run/libvirt --volume=/var/lib/libvirt:/var/lib/libvirt --volume=/etc/libvirt/qemu:/etc/libvirt/qemu --volume=/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro --volume=/var/log/containers/nova:/var/log/nova --net=host --privileged=true 192.168.24.1:8787/rhosp12/openstack-nova-libvirt-docker:2017-06-19.1
Comment 28 Dan Prince 2017-06-21 13:34:46 EDT
We found a couple of issues caused by the /home/stack/virt/docker-osp12.yaml Heat environment file used to create these setups, and I'm wondering if this is also a remnant of issues in that config file. I cross-referenced the contents of that file with what we are setting upstream, and I think the issue here might be that we are setting:

  DockerNovaConfigImage: openstack-nova-base-docker:2017-06-19.1  

This gets used by a number of the nova services in a shared fashion to generate configs for the shared nova services. To match what we are doing upstream it should be set to:

  DockerNovaConfigImage: openstack-nova-compute-docker:2017-06-19.1  

(or equivalent for the downstream containers you are using).

This might help here; it is at least a difference I've seen...
Comment 29 Jon Schlueter 2017-06-21 13:46:40 EDT
changed in the docker-osp12.yaml
Comment 30 Alexander Chuzhoy 2017-06-21 17:03:24 EDT
Re-deployed with:

(undercloud) [stack@undercloud-0 ~]$ cat overcloud_deploy.sh 
#!/bin/bash
openstack overcloud deploy --templates \
    --libvirt-type kvm \
    -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/hostnames.yml \
-e /home/stack/virt/docker-osp12.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/nodes_data.yaml \
--log-file overcloud_deployment_83.log

(undercloud) [stack@undercloud-0 ~]$ grep DockerNovaConfigImage /home/stack/virt/docker-osp12.yaml
  DockerNovaConfigImage: openstack-nova-compute-docker:2017-06-19.1  

(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin@192.168.24.7
Last login: Wed Jun 21 17:00:22 2017 from 192.168.24.1
[heat-admin@compute-0 ~]$ sudo -i
[root@compute-0 ~]# grep -i error /var/log/libvirt/qemu/instance-00000001.log
libvirt:  error : Unable to move /dev/log mount to /var/run/libvirt/qemu/instance-00000001.log: No such file or directory


Same result...
Comment 31 Michal Privoznik 2017-06-22 01:46:09 EDT
(In reply to Alexander Chuzhoy from comment #27)
> after running "mkdir /var/lib/config-data/nova_libvirt/etc/libvirt/secrets",
> was able to start the container nova_libvirt.
> That was build with:
> 
> 
> 
> 
> Dockerfile:
> FROM 192.168.24.1:8787/rhosp12/openstack-nova-libvirt-docker:2017-06-19.1
> RUN yum localinstall -y

Have you restarted libvirtd after updating libvirt? Otherwise you're still running the old daemon which has the bug.

> Running instance in overcloud still results in error.
> 
> The log is mounted this way:
> /var/log/libvirt/qemu:/var/log/libvirt/qemu:ro",
> 
> [root@compute-0 ~]# grep -i error
> /var/log/libvirt/qemu/instance-00000002.log 
> libvirt:  error : Unable to move /dev/log mount to
> /var/run/libvirt/qemu/instance-00000002.log: No such file or directory

Awesome, we finally have a log file. Can you attach it please? It's important for debugging libvirt to attach libvirt logs. In general, for debugging X it's usually helpful to have X's logs.

> 
> 
> 
> All mounts for container:
>       "volumes": [
>         "/etc/hosts:/etc/hosts:ro",
>         "/etc/localtime:/etc/localtime:ro",
>         "/etc/puppet:/etc/puppet:ro",
>         "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro",
>        
> "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro",
>        
> "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.
> crt:ro",
>         "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro",
>         "/dev/log:/dev/log",

So /dev/log exists on the host. I assume this is some Fedora/RHEL based machine and this /dev/log is a mount bind of a systemd socket. Am I right? What's the output of "stat /dev/log" ran from the host?
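Whether /dev/log is an ordinary device node or a bind mount of a systemd socket can also be answered from the mount table, not just from stat. A minimal sketch of that lookup (the helper name and the sample mount-table text are illustrative, not taken from this host):

```python
def mount_entry_for(path, mounts_text):
    """Return (source, fstype) of the last mount-table entry whose
    target is exactly `path`, or None if nothing is mounted there.
    `mounts_text` follows the /proc/self/mounts format:
    source target fstype options dump pass."""
    entry = None
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == path:
            # Later entries shadow earlier ones for the same target.
            entry = (fields[0], fields[2])
    return entry

# Illustrative mount table resembling the `mount | grep log` output below
sample = """\
devtmpfs /dev devtmpfs rw,nosuid,seclabel 0 0
devtmpfs /dev/log devtmpfs rw,nosuid,seclabel 0 0
"""
print(mount_entry_for("/dev/log", sample))   # → ('devtmpfs', 'devtmpfs')
print(mount_entry_for("/dev/null", sample))  # → None
```

On a real host the same lookup would read /proc/self/mounts instead of the sample string.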
Comment 32 Michal Privoznik 2017-06-22 12:20:07 EDT
I've talked to sasha on IRC; we've debugged the issue together and confirmed that my patches fix the problem. Therefore, I've posted them on the list:

https://www.redhat.com/archives/libvir-list/2017-June/msg00940.html
Comment 33 Alexander Chuzhoy 2017-06-22 12:32:19 EDT
Yes, installing the RPMs from comment #19 worked. I needed to work around two other issues reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=1464212
https://bugzilla.redhat.com/show_bug.cgi?id=1464182 

And the VM shows as active.
Comment 34 Sven Anderson 2017-06-23 05:26:20 EDT
*** Bug 1463625 has been marked as a duplicate of this bug. ***
Comment 35 yafu 2017-07-03 03:39:59 EDT
Reproduced the bug purely from libvirt with test steps in the patch:
https://www.redhat.com/archives/libvir-list/2017-June/msg00942.html.

pkg version:
libvirt-3.2.0-14.el7.x86_64

Reproduce steps:
1.#socat unix-listen:/tmp/socket stdio
2.#touch /dev/socket
3.#mount --bind /tmp/socket /dev/socket
4.#virsh start rhel7.3
error: Failed to start domain rhel7.3
error: internal error: Process exited prior to exec: libvirt:  error : Unable to move /dev/socket mount to /var/run/libvirt/qemu/rhel7.3.socket: No such file or directory
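The failure comes down to what kind of file libvirt finds under /dev when it rebuilds the namespace: character and block devices can be recreated with mknod(), while regular files, sockets, and FIFOs have to be touched and bind-mounted instead, which is exactly the case the old code missed. A minimal illustration of that classification (the helper name and test paths are mine, not libvirt's API):

```python
import os
import socket
import stat
import tempfile

def dev_file_kind(path):
    """Classify a path the way namespace-building code must:
    'device' entries can be mknod()-ed inside the new /dev, while
    'bind-mount' entries (regular files, sockets, FIFOs) need a
    touch + `mount --bind` instead. Hypothetical helper, not libvirt code."""
    m = os.lstat(path).st_mode
    if stat.S_ISCHR(m) or stat.S_ISBLK(m):
        return "device"
    if stat.S_ISREG(m) or stat.S_ISSOCK(m) or stat.S_ISFIFO(m):
        return "bind-mount"
    if stat.S_ISLNK(m):
        return "symlink"
    return "unsupported"

workdir = tempfile.mkdtemp()
sock_path = os.path.join(workdir, "log")        # stands in for /dev/log
socket.socket(socket.AF_UNIX).bind(sock_path)   # binding creates the socket file
reg_path = os.path.join(workdir, "disk.qcow2")  # stands in for a file-backed disk
open(reg_path, "w").close()

kind_sock = dev_file_kind(sock_path)
kind_reg = dev_file_kind(reg_path)
print(kind_sock, kind_reg)  # → bind-mount bind-mount
```

Both the socket and the regular file land in the bind-mount bucket, which is why mknod()-only handling broke on them.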
Comment 36 Michal Privoznik 2017-07-11 09:00:47 EDT
I've just pushed the patches upstream:

commit e93d844b90aab2676a1ad910a165c9c83c99de3d
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Mon Jun 19 17:05:31 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    qemu ns: Create chardev backends more frequently

    Currently, the only type of chardev that we create the backend
    for in the namespace is type='dev'. This is not enough, other
    backends might have files under /dev too. For instance channels
    might have a unix socket under /dev (well, bind mounted under
    /dev from a different place).

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>

commit 7976d1a5146bc5117e7125f0cbbfe537248dc630
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Thu Jun 15 17:05:06 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    qemuDomainAttachDeviceMknodRecursive: Support file mount points

    https://bugzilla.redhat.com/show_bug.cgi?id=1462060

    Just like in the previous commit, when attaching a file based
    device which has its source living under /dev (that is not a
    device rather than a regular file), calling mknod() is no help.
    We need to:

    1) bind mount device to some temporary location
    2) enter the namespace
    3) move the mount point to desired place
    4) umount it in the parent namespace from the temporary location

    At the same time, the check in qemuDomainNamespaceSetupDisk makes
    no longer sense. Therefore remove it.

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>

commit 4f05f188de5de0d5764699ba5231a7d20b42dc9c
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Fri Jun 16 13:49:52 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    qemuDomainCreateDeviceRecursive: Support file mount points

    https://bugzilla.redhat.com/show_bug.cgi?id=1462060

    When building a qemu namespace we might be dealing with bare
    regular files. Files that live under /dev. For instance
    /dev/my_awesome_disk:

      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source file='/dev/my_awesome_disk'/>
        <target dev='vdc' bus='virtio'/>
      </disk>

      # qemu-img create -f qcow2 /dev/my_awesome_disk 10M

    So far we were mknod()-ing them which is
    obviously wrong. We need to touch the file and bind mount it to
    the original:

    1) touch /var/run/libvirt/qemu/fedora.dev/my_awesome_disk
    2) mount --bind /dev/my_awesome_disk /var/run/libvirt/qemu/fedora.dev/my_awesome_disk

    Later, when the new /dev is built and replaces original /dev the
    file is going to live at expected location.

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>

commit 4fedbac6206ad6c754d936ac39f0b42417b7dbd5
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Thu Jun 15 15:50:39 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    qemuDomainAttachDeviceMknodHelper: Fail on unsupported file type

    Currently, we silently assume that file we are creating in the
    namespace is either a link or a device (character or block one).
    This is not always the case. Therefore instead of doing something
    wrong, claim about unsupported file type.

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>

commit 89921f54cd4775909f30cfb56313ed3afd3945ca
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Mon Jun 19 10:56:20 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    qemuDomainCreateDeviceRecursive: Fail on unsupported file type

    Currently, we silently assume that file we are creating in the
    namespace is either a link or a device (character or block one).
    This is not always the case. Therefore instead of doing something
    wrong, claim about unsupported file type.

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>

commit 4d8a914be00b5d94b7a5ee8dd3a857d26700592d
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Thu Jun 15 15:48:52 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    qemu: Move preserved mount points path generation into a separate function

    This function is going to be used on other places, so
    instead of copying code we can just call the function.

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>

commit 7154917908d9f712a950a2716c4687e57ccb74e7
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Tue Jun 20 17:35:54 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    qemuDomainBuildNamespace: Handle special file mount points

    https://bugzilla.redhat.com/show_bug.cgi?id=1459592

    In 290a00e41d I've tried to fix the process of building a
    qemu namespace when dealing with file mount points. What I
    haven't realized then is that we might be dealing not with just
    regular files but also special files (like sockets). Indeed, try
    the following:

    1) socat unix-listen:/tmp/soket stdio
    2) touch /dev/socket
    3) mount --bind /tmp/socket /dev/socket
    4) virsh start anyDomain

    Problem with my previous approach is that I wasn't creating the
    temporary location (where mount points under /dev are moved) for
    anything but directories and regular files.

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>

commit 37ef8763c4ce085d0ea8e7c18b29a9dd27874291
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Mon Jun 19 16:43:25 2017 +0200
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Tue Jul 11 14:45:15 2017 +0200

    conf: Rename and expose virDomainChrSourceDefPath

    It comes very handy to have source path for chardevs. We already
    have such function: virDomainAuditChardevPath() but it's static
    and has name not suitable for exposing. Moreover, while exposing
    it change its name slightly to virDomainChrSourceDefGetPath.

    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: John Ferlan <jferlan@redhat.com>


v3.5.0-56-ge93d844b9
Comment 46 yafu 2018-01-02 04:32:54 EST
Verified with libvirt-3.9.0-6.el7.x86_64.

Test steps:
1.mount --bind /dev/log /dev/log

2.mount | grep log
devtmpfs on /dev/log type devtmpfs (rw,nosuid,seclabel,size=5899976k,nr_inodes=1474994,mode=755)

3.ll -Z /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log

4.Start a guest:
# virsh start iommu1
Domain iommu1 started

5.Also started guests with other types of files bind-mounted under /dev, such as a directory, regular file, pipe, symlink, and character device; all the guests start correctly.
Comment 47 yafu 2018-01-02 04:37:29 EST
Hi Michal,

After mounting /dev/log more than once, libvirt cannot start a guest correctly. Could you help check this issue please? Thanks a lot.

Test steps:
1.#mount --bind /dev/log /dev/log

2.#mount --bind /dev/log /dev/log

3.#mount | grep log
devtmpfs on /dev/log type devtmpfs (rw,nosuid,seclabel,size=5899976k,nr_inodes=1474994,mode=755)
devtmpfs on /dev/log type devtmpfs (rw,nosuid,seclabel,size=5899976k,nr_inodes=1474994,mode=755)
devtmpfs on /dev/log type devtmpfs (rw,nosuid,seclabel,size=5899976k,nr_inodes=1474994,mode=755)

4.# virsh start iommu1
error: Failed to start domain iommu1
error: internal error: Process exited prior to exec: libvirt: QEMU Driver error : Failed to create /var/run/libvirt/qemu/16-iommu1.log: No such device or address

5.The guest can start successfully when disable qemu namespace.
Comment 48 Michal Privoznik 2018-01-02 08:12:12 EST
It's fixed upstream:

commit 3d899b99653bfa8f0fa275e689d0f5716142f5a5
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Wed Nov 8 15:20:10 2017 +0100
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Wed Nov 8 16:38:26 2017 +0100

    qemu-ns: Detect /dev/* mount point duplicates even better
    
    In 4f1570720218302 I've tried to make duplicates detection for
    nested /dev mount better. However, I've missed the obvious case
    when there are two same mount points. For instance if:
    
      # mount --bind /dev/blah /dev/blah
      # mount --bind /dev/blah /dev/blah
    
    Yeah, very unlikely (in qemu driver world) but possible.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

However, I'm not sure if we want to misuse this bug to backport the fix. If needed we should create a new one.
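The idea behind that upstream dedup fix can be sketched independently of libvirt: when collecting the /dev/* mount points to preserve, both exact duplicates of an already-seen target and paths nested under an already-kept mount point must be skipped. A toy model (the function name is mine, not libvirt's):

```python
def mounts_to_preserve(dev_mounts):
    """Given /dev/* mount targets in mount-table order, return the ones
    the namespace code must move aside: skip exact duplicates (the same
    target bind-mounted twice, as in comment 47) and paths nested under
    an already-kept mount. Toy model of the duplicate-detection logic."""
    kept = []
    for target in dev_mounts:
        if target in kept:
            continue  # same mount point stacked more than once
        if any(target.startswith(seen + "/") for seen in kept):
            continue  # nested under a mount we already preserve
        kept.append(target)
    return kept

# The stacked /dev/log mounts from comment 47 collapse to one entry:
print(mounts_to_preserve(["/dev/log", "/dev/log", "/dev/log"]))
# → ['/dev/log']
# A mount nested under a preserved directory is skipped too:
print(mounts_to_preserve(["/dev/hugepages", "/dev/hugepages/libvirt"]))
# → ['/dev/hugepages']
```

Without the exact-duplicate check, the second /dev/log entry would be moved to a temporary path that was already consumed by the first, which matches the "No such device or address" failure seen in comment 47.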
Comment 49 yafu 2018-01-03 00:27:31 EST
(In reply to Michal Privoznik from comment #48)
> It's fixed upstream:
> 
> commit 3d899b99653bfa8f0fa275e689d0f5716142f5a5
> Author:     Michal Privoznik <mprivozn@redhat.com>
> AuthorDate: Wed Nov 8 15:20:10 2017 +0100
> Commit:     Michal Privoznik <mprivozn@redhat.com>
> CommitDate: Wed Nov 8 16:38:26 2017 +0100
> 
>     qemu-ns: Detect /dev/* mount point duplicates even better
>     
>     In 4f1570720218302 I've tried to make duplicates detection for
>     nested /dev mount better. However, I've missed the obvious case
>     when there are two same mount points. For instance if:
>     
>       # mount --bind /dev/blah /dev/blah
>       # mount --bind /dev/blah /dev/blah
>     
>     Yeah, very unlikely (in qemu driver world) but possible.
>     
>     Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
> 
> However, I'm not sure if we want to misuse this bug to backport the fix. If
> needed we should create a new one.

Thanks, Michal. Filed a new bug to track the issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1530451
Comment 50 yafu 2018-01-03 00:29:47 EST
According to comment 35 and comment 47, moving the bug to verified.
Comment 54 errata-xmlrpc 2018-04-10 06:46:43 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0704

Note You need to log in before you can comment on or make changes to this bug.