Bug 1297760 - [ppc64le] qemu-kvm permission denied to access image on iscsi domain (unable to run the vm)
Status: CLOSED DUPLICATE of bug 1297765
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.1
Hardware: ppc64le Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.0.0-alpha
Target Release: 4.0.0
Assigned To: Allon Mureinik
Carlos Mestre González
storage
Depends On:
Blocks:
Reported: 2016-01-12 07:20 EST by Carlos Mestre González
Modified: 2016-02-10 11:53 EST
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-01-14 07:56:58 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
amureini: ovirt-4.0.0?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments: None
Description Carlos Mestre González 2016-01-12 07:20:34 EST
Description of problem:
One of the hosts in the cluster seems unable to run VMs or to hotplug iSCSI disks. The other host in the cluster *works fine*; I've checked the packages and there doesn't seem to be an issue with them. The affected host also works fine as SPM for typical operations (migrating disks, adding domains, ...).

The packages are the proper ones for the release. Could you take a look?


Version-Release number of selected component (if applicable):
rhevm-3.6.1.3-0.1.el6.noarch
libvirt-daemon-kvm-1.2.17-13.el7_2.2.ppc64le
libvirt-docs-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.2.ppc64le
libvirt-lock-sanlock-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.2.ppc64le
libvirt-python-1.2.17-2.el7.ppc64le
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-interface-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-config-network-1.2.17-13.el7_2.2.ppc64le
libvirt-debuginfo-1.2.17-13.el7_2.2.ppc64le
libvirt-client-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-storage-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-secret-1.2.17-13.el7_2.2.ppc64le
libvirt-devel-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-network-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-login-shell-1.2.17-13.el7_2.2.ppc64le
qemu-img-rhev-2.3.0-31.el7_2.4.ppc64le
ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.ppc64le
qemu-kvm-tools-rhev-2.3.0-31.el7_2.4.ppc64le
qemu-kvm-rhev-2.3.0-31.el7_2.4.ppc64le
qemu-kvm-common-rhev-2.3.0-31.el7_2.4.ppc64le
vdsm-jsonrpc-4.17.13-1.el7ev.noarch
vdsm-xmlrpc-4.17.13-1.el7ev.noarch
vdsm-python-4.17.13-1.el7ev.noarch
vdsm-4.17.13-1.el7ev.noarch
vdsm-infra-4.17.13-1.el7ev.noarch
vdsm-yajsonrpc-4.17.13-1.el7ev.noarch
vdsm-cli-4.17.13-1.el7ev.noarch


How reproducible:
100%

Steps to Reproduce:
1. Create a VM with a boot disk on the iSCSI domain (or use an existing one)
2. Try to start the VM

Actual results:
Thread-23957::ERROR::2016-01-12 05:34:03,418::vm::758::virt.vm::(_startUnderlyingVm) vmId=`ccfc6e2b-60dc-4b29-a10f-ddc6d00b1c99`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 702, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1889, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: process exited while connecting to monitor: 2016-01-12T10:34:03.210899Z qemu-kvm: -drive file=/rhev/data-center/b3115183-d522-428b-9dce-2809fe39a79d/bc7ac735-26d4-4bbd-a45b-0ac909896d00/images/9b158596-e5fe-40d5-95ce-da802a07756a/1ec33a65-7728-4259-a36b-9c1508907e35,if=none,id=drive-virtio-disk1,format=qcow2,serial=9b158596-e5fe-40d5-95ce-da802a07756a,cache=none,werror=stop,rerror=stop,aio=native: Could not open '/rhev/data-center/b3115183-d522-428b-9dce-2809fe39a79d/bc7ac735-26d4-4bbd-a45b-0ac909896d00/images/9b158596-e5fe-40d5-95ce-da802a07756a/1ec33a65-7728-4259-a36b-9c1508907e35': Permission denied
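The "Permission denied" in the traceback above means the qemu process could not open the image path (a symlink under /rhev/data-center that resolves to the volume's device node). A quick way to narrow this down on the failing host is to compare the owner, mode, and effective access on the resolved path with the working host. The helper below is a minimal sketch; the example path and any ownership expectation (images are typically accessible to the qemu/kvm user) are assumptions, not taken from this report:

```python
import os
import stat

def describe_access(path):
    """Follow symlinks (as qemu's open() would) and report the owner
    uid/gid, the permission bits, and whether the *current* process
    can read and write the file."""
    st = os.stat(path)  # os.stat follows symlinks
    return {
        "uid": st.st_uid,
        "gid": st.st_gid,
        "mode": oct(stat.S_IMODE(st.st_mode)),
        "readable": os.access(path, os.R_OK),
        "writable": os.access(path, os.W_OK),
    }

# On the failing host, run this as the qemu user against the path from
# the error message, e.g. (hypothetical placeholder path):
#   describe_access("/rhev/data-center/<dc-id>/<sd-id>/images/<img-id>/<vol-id>")
```

If the mode or ownership differs from the working host, that points at the host-side volume preparation (VDSM/udev) rather than the storage itself. Note that plain mode bits won't reveal an SELinux labeling difference; running `ls -lZ` on the resolved path on both hosts would cover that case.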

Also regarding the hotplug:
Steps to Reproduce:
1. Use a VM with a boot disk on an NFS domain and start it
2. Hotplug an iSCSI disk (VMs -> Disks -> New)

Actual results:
The disk is added, but the hotplug fails with:
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (ajp-/127.0.0.1:8702-2) [5ab6ea5f] Failed in 'HotPlugDiskVDS' method
2016-01-10 02:37:01,532 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-2) [5ab6ea5f] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk0' could not be initialized


Additional info:
- The host doesn't seem to have any issue with NFS, nor with managing iSCSI domains (adding/removing them, creating/migrating disks, ...)
- The other host in the cluster doesn't have any issue like this one
Comment 1 Allon Mureinik 2016-01-13 05:13:46 EST
Carlos, what's the difference between this BZ and bug 1297760? Looks like a double-submit issue, offhand.
Comment 2 Liron Aravot 2016-01-13 06:19:51 EST
Carlos, in addition to Allon's comment: if there's a hotplug issue, let's tackle it in a separate bug.
Please keep in mind that we support hot-plugging disks only into VMs with an OS installed.
Comment 3 Allon Mureinik 2016-01-14 07:56:58 EST

*** This bug has been marked as a duplicate of bug 1297765 ***
