Bug 1285836 - [ppc64le] VM fails to start on: libvirtError: internal error: process exited while connecting to monitor due to AVC denial
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
3.6.0
ppc64le Unspecified
Priority: high  Severity: high
: ovirt-3.6.2
: 3.6.0
Assigned To: Adam Litke
Aharon Canan
storage
Depends On:
Blocks: RHEV3.6PPC
Reported: 2015-11-26 10:46 EST by Elad
Modified: 2016-02-21 06:09 EST (History)
14 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 09:34:10 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
audit.log engine.log libvirtd.log messages vdsm.log sanlock.log vm-3.log (qemu) (1.24 MB, application/x-gzip)
2015-11-26 10:46 EST, Elad

Description Elad 2015-11-26 10:46:53 EST
Created attachment 1099331 [details]
audit.log  engine.log  libvirtd.log  messages  vdsm.log  sanlock.log vm-3.log (qemu)

Description of problem:
Starting a VM fails with a libvirt error.


Version-Release number of selected component (if applicable):

Red Hat Enterprise Linux Server release 7.2 (Maipo)
Linux ibm-p8-rhevm-03.rhts.eng.bos.redhat.com 3.10.0-327.2.1.el7.ppc64le #1 SMP Fri Nov 13 14:35:26 EST 2015 ppc64le ppc64le ppc64le GNU/Linux
vdsm-python-4.17.10.1-0.el7ev.noarch
vdsm-yajsonrpc-4.17.10.1-0.el7ev.noarch
vdsm-4.17.10.1-0.el7ev.noarch
vdsm-jsonrpc-4.17.10.1-0.el7ev.noarch
vdsm-cli-4.17.10.1-0.el7ev.noarch
vdsm-xmlrpc-4.17.10.1-0.el7ev.noarch
vdsm-infra-4.17.10.1-0.el7ev.noarch
libvirt-daemon-driver-storage-1.2.17-13.el7_2.2.ppc64le
libvirt-docs-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-secret-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.2.ppc64le
libvirt-devel-1.2.17-13.el7_2.2.ppc64le
libvirt-lock-sanlock-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-network-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-1.2.17-13.el7_2.2.ppc64le
libvirt-login-shell-1.2.17-13.el7_2.2.ppc64le
libvirt-python-1.2.17-2.el7.ppc64le
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-interface-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-config-network-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-lxc-1.2.17-13.el7_2.2.ppc64le
libvirt-debuginfo-1.2.17-13.el7_2.2.ppc64le
libvirt-client-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-kvm-1.2.17-13.el7_2.2.ppc64le
qemu-kvm-tools-rhev-2.3.0-31.el7_2.3.ppc64le
qemu-kvm-common-rhev-2.3.0-31.el7_2.3.ppc64le
ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.3.ppc64le
qemu-kvm-rhev-debuginfo-2.3.0-31.el7_2.3.ppc64le
qemu-img-rhev-2.3.0-31.el7_2.3.ppc64le

How reproducible:
Not always

Steps to Reproduce:
1. Start VM with a disk attached (resides on FC in my case). 
Occurs both with and without OS installed on the guest.


Actual results:
Start VM fails:

vdsm.log:

Thread-134::ERROR::2015-11-26 10:33:38,975::vm::751::virt.vm::(_startUnderlyingVm) vmId=`6b9a4548-0c6b-456a-a01e-8659b9eabcae`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 695, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1868, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: early end of file from monitor: possible problem:
2015-11-26T15:33:38.525160Z qemu-kvm: -drive file=/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625,if=none,id=drive-virtio-disk0,format=qcow2,serial=c87333fd-59c8-4121-a932-0c51f4b20d51,cache=none,werror=stop,rerror=stop,aio=native: Could not open '/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625': Permission denied
Thread-134::INFO::2015-11-26 10:33:38,977::vm::1278::virt.vm::(setDownStatus) vmId=`6b9a4548-0c6b-456a-a01e-8659b9eabcae`::Changed state to Down: internal error: early end of file from monitor: possible problem:
2015-11-26T15:33:38.525160Z qemu-kvm: -drive file=/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625,if=none,id=drive-virtio-disk0,format=qcow2,serial=c87333fd-59c8-4121-a932-0c51f4b20d51,cache=none,werror=stop,rerror=stop,aio=native: Could not open '/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625': Permission denied (code=1)
Thread-134::DEBUG::2015-11-26 10:33:38,977::vmchannels::214::vds::(unregister) Delete fileno 33 from listener.
Thread-134::DEBUG::2015-11-26 10:33:38,978::__init__::206::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 43034928940, "6b9a4548-0c6b-456a-a01e-8659b9eabcae": {"status": "Down", "timeOffset": "0", "exitReason": 1, "exitMessage": "internal error: early end of file from monitor: possible problem:\n2015-11-26T15:33:38.525160Z qemu-kvm: -drive file=/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625,if=none,id=drive-virtio-disk0,format=qcow2,serial=c87333fd-59c8-4121-a932-0c51f4b20d51,cache=none,werror=stop,rerror=stop,aio=native: Could not open '/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625': Permission denied", "exitCode": 1}}, "jsonrpc": "2.0", "method": "|virt|VM_status|6b9a4548-0c6b-456a-a01e-8659b9eabcae"}


libvirtd.log:
2015-11-26 15:33:38.712+0000: 125861: debug : qemuMonitorIO:773 : Error on monitor internal error: early end of file from monitor: possible problem:
2015-11-26T15:33:38.525160Z qemu-kvm: -drive file=/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625,if=none,id=drive-virtio-disk0,format=qcow2,serial=c87333fd-59c8-4121-a932-0c51f4b20d51,cache=none,werror=stop,rerror=stop,aio=native: Could not open '/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625': Permission denied


[root@ibm-p8-rhevm-03 c87333fd-59c8-4121-a932-0c51f4b20d51]# vdsClient -s 0 getVolumeInfo d63ce57c-f46d-4771-b4bb-0b490693cb0e 2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6 c87333fd-59c8-4121-a932-0c51f4b20d51 308fe147-7e4f-48eb-b0fd-2ef6d796d625
        status = OK
        domain = d63ce57c-f46d-4771-b4bb-0b490693cb0e
        capacity = 1073741824
        voltype = LEAF
        description = {"DiskAlias":"vm-3_Disk1","DiskDescription":""}
        parent = 00000000-0000-0000-0000-000000000000
        format = COW
        image = c87333fd-59c8-4121-a932-0c51f4b20d51
        uuid = 308fe147-7e4f-48eb-b0fd-2ef6d796d625
        disktype = 2
        legality = LEGAL
        mtime = 0
        apparentsize = 1073741824
        truesize = 1073741824
        type = SPARSE
        children = []
        pool = 
        ctime = 1448549210



[root@ibm-p8-rhevm-03 images]# tree /rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images
/rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images
|-- 3134d60a-ad5a-4306-9c1a-04f22d34d303
|   `-- 010e53b0-a6d8-49fb-a9ca-040a2d393e10 -> /dev/d63ce57c-f46d-4771-b4bb-0b490693cb0e/010e53b0-a6d8-49fb-a9ca-040a2d393e10
|-- 469e9700-5a07-42b4-a0c2-756af5779e14 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/469e9700-5a07-42b4-a0c2-756af5779e14
|-- 59757eab-2d32-491b-9c85-49d85a6b5988 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/59757eab-2d32-491b-9c85-49d85a6b5988
|-- 5a036799-0762-41b7-90e0-df5e4bdf1115 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/5a036799-0762-41b7-90e0-df5e4bdf1115
|-- 65f29daf-bc7e-4f9c-990e-4abca03347ab -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/65f29daf-bc7e-4f9c-990e-4abca03347ab
|-- 776e1c4d-b3fe-4ae6-bc90-97a8ffec4636 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/776e1c4d-b3fe-4ae6-bc90-97a8ffec4636
|-- 832106ea-2272-4e2f-999a-8b6c7a737bbe -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/832106ea-2272-4e2f-999a-8b6c7a737bbe
|-- 9152e218-9900-4751-a95a-14c36224d4b5 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/9152e218-9900-4751-a95a-14c36224d4b5
|-- 9c3b1a8c-501c-4c80-9ade-e792fcfe6e63 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/9c3b1a8c-501c-4c80-9ade-e792fcfe6e63
|-- aed59189-5a1e-401e-802e-53d2b9e3d587 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/aed59189-5a1e-401e-802e-53d2b9e3d587
|-- b6cb13b7-538c-47f1-821f-962fcc5d11ac -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/b6cb13b7-538c-47f1-821f-962fcc5d11ac
|-- bd00d15a-6186-4f0e-b07a-920d8b6e4c31 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/bd00d15a-6186-4f0e-b07a-920d8b6e4c31
|-- be526d04-b073-46ed-819f-f86beed5a3a9 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/be526d04-b073-46ed-819f-f86beed5a3a9
|-- c72e22eb-e853-48fb-994d-f5f1867c103e -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/c72e22eb-e853-48fb-994d-f5f1867c103e
|-- c87333fd-59c8-4121-a932-0c51f4b20d51
|   `-- 308fe147-7e4f-48eb-b0fd-2ef6d796d625 -> /dev/d63ce57c-f46d-4771-b4bb-0b490693cb0e/308fe147-7e4f-48eb-b0fd-2ef6d796d625
|-- ca16a397-74ae-4c2a-8520-663161acd20c -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/ca16a397-74ae-4c2a-8520-663161acd20c
|-- d2ee57ac-86d0-4c33-850b-5055a94a6643 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/d2ee57ac-86d0-4c33-850b-5055a94a6643
|-- d3c1573e-f887-4b0c-84fd-775216cd1a3b -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/d3c1573e-f887-4b0c-84fd-775216cd1a3b
|-- db65783a-7a77-4cb3-9cc8-795a68872dd5 -> /var/run/vdsm/storage/d63ce57c-f46d-4771-b4bb-0b490693cb0e/db65783a-7a77-4cb3-9cc8-795a68872dd5
|-- dfafa670-ef0e-4002-9220-e08986670fd4
|   `-- a6d1ed0b-e06c-4ee9-aa3f-d70d2d88f527 -> /dev/d63ce57c-f46d-4771-b4bb-0b490693cb0e/a6d1ed0b-e06c-4ee9-aa3f-d70d2d88f527
`-- f56e8293-bccd-468f-a14e-3dbf935c3b6b
    `-- b5138968-b48c-4b17-83d2-814b6055132e -> /dev/d63ce57c-f46d-4771-b4bb-0b490693cb0e/b5138968-b48c-4b17-83d2-814b6055132e




Expected results:
VM should start successfully

Additional info: 
audit.log  engine.log  libvirtd.log  messages  vdsm.log  sanlock.log vm-3.log (qemu)
Comment 1 Dan Kenigsberg 2015-11-26 12:12:11 EST
If you run

  vdsClient -s 0 prepareImage c87333fd-59c8-4121-a932-0c51f4b20d51

What are the permissions on

  ls -lH /rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625

It smells more like a storage issue than virt.
Comment 2 Elad 2015-11-26 13:45:42 EST
Dan, 

[root@ibm-p8-rhevm-03 ~]# vdsClient -s 0 prepareImage 2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6 d63ce57c-f46d-4771-b4bb-0b490693cb0e c87333fd-59c8-4121-a932-0c51f4b20d51 308fe147-7e4f-48eb-b0fd-2ef6d796d625
{'domainID': 'd63ce57c-f46d-4771-b4bb-0b490693cb0e',
 'imageID': 'c87333fd-59c8-4121-a932-0c51f4b20d51',
 'leaseOffset': 114294784,
 'leasePath': '/dev/d63ce57c-f46d-4771-b4bb-0b490693cb0e/leases',
 'path': '/rhev/data-center/mnt/blockSD/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625',
 'volType': 'path',
 'volumeID': '308fe147-7e4f-48eb-b0fd-2ef6d796d625'}

[root@ibm-p8-rhevm-03 ~]# ls -lH /rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625
brw-rw----. 1 vdsm qemu 253, 13 Nov 26 13:44 /rhev/data-center/2a0ea7f8-bafe-4b0b-a5a1-4ce1a33271a6/d63ce57c-f46d-4771-b4bb-0b490693cb0e/images/c87333fd-59c8-4121-a932-0c51f4b20d51/308fe147-7e4f-48eb-b0fd-2ef6d796d625
Comment 3 Dan Kenigsberg 2015-11-27 05:31:01 EST
type=AVC msg=audit(1448552018.514:12864): avc:  denied  { read } for  pid=128394 comm="qemu-kvm" name="308fe147-7e4f-48eb-b0fd-2ef6d796d625" dev="dm-0" ino=201460833 scontext=system_u:system_r:svirt_t:s0:c27,c480 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
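The denial record above can be decoded mechanically. A minimal POSIX-sh sketch (the `avc_field` helper is hypothetical, written for illustration; the sample record is the one from this comment): the tcontext of unlabeled_t is the tell-tale sign that the inode was created while SELinux was off and never relabeled, while the scontext shows qemu-kvm confined in the svirt_t domain.

```shell
#!/bin/sh
# Hypothetical helper: pull a key=value field out of an audit AVC record.
avc_field() {  # usage: avc_field <record> <field-name>
    printf '%s\n' "$1" | grep -o "$2=[^ ]*" | cut -d= -f2
}

record='type=AVC msg=audit(1448552018.514:12864): avc:  denied  { read } for  pid=128394 comm="qemu-kvm" name="308fe147-7e4f-48eb-b0fd-2ef6d796d625" dev="dm-0" ino=201460833 scontext=system_u:system_r:svirt_t:s0:c27,c480 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file'

avc_field "$record" scontext   # -> system_u:system_r:svirt_t:s0:c27,c480
avc_field "$record" tcontext   # -> system_u:object_r:unlabeled_t:s0
avc_field "$record" tclass     # -> lnk_file
```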
Comment 4 Michal Skrivanek 2015-11-27 09:44:44 EST
labeling/udev issue?

@Elad, please use the Hardware field correctly
Comment 5 Red Hat Bugzilla Rules Engine 2015-11-29 06:29:03 EST
This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset.
Please set the correct milestone or add the z-stream flag.
Comment 6 Adam Litke 2015-11-30 11:08:49 EST
$ grep 'Kernel command line' messages
Nov 25 09:53:29 ibm-p8-rhevm-03 kernel: Kernel command line: root=/dev/mapper/rhel--l_ibm--p8--rhevm--03-root ro crashkernel=auto rd.lvm.lv=rhel-l_ibm-p8-rhevm-03/root console=tty0 selinux=0 rd.lvm.lv=rhel-l_ibm-p8-rhevm-03/swap rhgb quiet LANG=en_US.UTF-8 systemd.log_level=debug systemd.log_target=kmsg
Nov 25 10:32:05 ibm-p8-rhevm-03 kernel: Kernel command line: root=/dev/mapper/rhel--l_ibm--p8--rhevm--03-root ro crashkernel=auto rd.lvm.lv=rhel-l_ibm-p8-rhevm-03/root console=tty0 selinux=0 rd.lvm.lv=rhel-l_ibm-p8-rhevm-03/swap rhgb quiet LANG=en_US.UTF-8 systemd.log_level=debug systemd.log_target=kmsg
Nov 25 10:47:44 ibm-p8-rhevm-03 kernel: Kernel command line: root=/dev/mapper/rhel--l_ibm--p8--rhevm--03-root ro crashkernel=auto rd.lvm.lv=rhel-l_ibm-p8-rhevm-03/root console=tty0 selinux=1 rd.lvm.lv=rhel-l_ibm-p8-rhevm-03/swap rhgb quiet LANG=en_US.UTF-8 systemd.log_level=debug systemd.log_target=kmsg


As you can see, the system had SELinux DISABLED until the third boot.  This can result in a mislabeled filesystem, since SELinux contexts are not applied to files while SELinux is disabled.

With SELinux enabled and vdsm stopped, remove stale files from the /rhev directory (being careful not to remove the contents of any mounted NFS shares).  Then restart vdsm, wait for the DC to become operational and try again to run the VM.
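The diagnosis above can be sketched as a script. Everything here is an assumption-laden illustration: the `selinux_disabled` helper is hypothetical, the sample command lines are shortened stand-ins for the grepped ones, and the recovery steps are left as comments to be run manually (as root, skipping mounted NFS shares).

```shell
#!/bin/sh
# Return 0 when a kernel command line explicitly disabled SELinux
# (selinux=0); files created during such a boot receive no label.
selinux_disabled() {
    case " $1 " in
        *" selinux=0 "*) return 0 ;;
        *) return 1 ;;
    esac
}

# Shortened sample command lines, after the pattern of the grep output above
boot2='ro crashkernel=auto selinux=0 rhgb quiet'
boot3='ro crashkernel=auto selinux=1 rhgb quiet'

selinux_disabled "$boot2" && echo "boot 2: SELinux off - new files stay unlabeled"
selinux_disabled "$boot3" || echo "boot 3: SELinux on"

# Recovery, per the comment above (run as root; NOT executed here):
#   systemctl stop vdsmd
#   find /rhev/data-center -maxdepth 2 -type l    # review stale links first,
#                                                 # skip mounted NFS shares
#   systemctl start vdsmd                         # let vdsm recreate the links
```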
Comment 7 Yaniv Kaul 2015-12-03 09:21:01 EST
Elad, based on above comment - any mislabeling? does it happen all the time?
Comment 8 Elad 2015-12-03 09:28:42 EST
It does not happen all the time; it happened on a specific host which was indeed configured with selinux=disabled and was set to enabled before the bug's first occurrence.
Comment 9 Yaniv Kaul 2015-12-03 09:34:10 EST
(In reply to Elad from comment #8)
> It does not happen all the time; it happened on a specific host which was
> indeed configured with selinux=disabled and was set to enabled before the
> bug's first occurrence.

Closing the bug per the comment above. Mislabeling happens.
