Bug 842146 - 3.1 - [Storage][Text] Not informative error message when trying to attach import domain with wrong permissions.
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 6.3
Assigned To: Oved Ourfali
QA Contact: Haim
Whiteboard: Storage
Keywords: ZStream
Depends On:
Blocks:
Reported: 2012-07-22 10:35 EDT by Leonid Natapov
Modified: 2012-12-04 14:03 EST
CC List: 13 users

See Also:
Fixed In Version: vdsm-4.9.6-32.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-04 14:03:17 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Leonid Natapov 2012-07-22 10:35:59 EDT
[Storage] attachStorageDomain fails with "cannot acquire lock" when importing a storage domain. The import of the storage domain itself works, but the subsequent attachStorageDomain call fails with "cannot acquire lock".
Here is the relevant excerpt from vdsm.log:


How to reproduce:
Try to import Storage Domain.
I used the following path:
orion.qa.lab.tlv.redhat.com:/export/shared_iso_domain_backup/shared_iso_domain1

---

Thread-517::DEBUG::2012-07-22 14:49:36,577::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /usr/bin/setsid /usr/bin/ionice -c1 -n0 /bin/su vdsm -s /bin/sh -c "/usr/libexec/vdsm/spmprotect.sh start 7233a711-98e8-4c3c-bcfa-44c4bcc4f6c6 2 5 /rhev/data-center/mnt/orion.qa.lab.tlv.redhat.com:_export_shared__iso__domain__backup_shared__iso__domain1/7233a711-98e8-4c3c-bcfa-44c4bcc4f6c6/dom_md/leases 5000 1000 3"' (cwd /usr/libexec/vdsm)
Thread-517::DEBUG::2012-07-22 14:49:36,683::__init__::1164::Storage.Misc.excCmd::(_log) FAILED: <err> = ''; <rc> = 1
Thread-517::ERROR::2012-07-22 14:49:36,684::task::853::TaskManager.Task::(_setError) Task=`b49ec5e3-a33b-4f45-8f71-23a4cedf258d`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 957, in attachStorageDomain
    pool.attachSD(sdUUID)
  File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 912, in attachSD
    dom.acquireClusterLock(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 432, in acquireClusterLock
    self._clusterLock.acquire(hostID)
  File "/usr/share/vdsm/storage/safelease.py", line 108, in acquire
    raise se.AcquireLockFailure(self._sdUUID, rc, out, err)
AcquireLockFailure: Cannot obtain lock: "id=7233a711-98e8-4c3c-bcfa-44c4bcc4f6c6, rc=1, out=['error - lease file does not exist or is not writeable', 'usage: /usr/libexec/vdsm/spmprotect.sh COMMAND PARAMETERS', 'Commands:', '  start { spUUID hostId renewal_interval_sec lease_path[:offset] lease_time_ms io_op_timeout_ms fail_retries }', 'Parameters:', '  spUUID -                pool uuid', '  hostId -                host id in pool', '  renewal_interval_sec -  intervals for lease renewals attempts', '  lease_path -            path to lease file/volume', '  offset -                offset of lease within file', '  lease_time_ms -         time limit within which lease must be renewed (at least 2*renewal_interval_sec)', '  io_op_timeout_ms -      I/O operation timeout', '  fail_retries -          Maximal number of attempts to retry to renew the lease before fencing (<= lease_time_ms/renewal_interval_sec)'], err=[]"
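
For reference, the acquire() call above shells out to spmprotect.sh with the arguments shown in the DEBUG line, and the script exits with rc=1 when the dom_md/leases file is not accessible to the vdsm user. A minimal sketch of that pre-condition check, assuming a hypothetical check_lease_writable helper (this is not vdsm code):

# Illustrative sketch only -- not vdsm code. spmprotect.sh runs as the vdsm
# user and needs a usable dom_md/leases file; when ownership flips to
# root:root it fails with rc=1 and the generic AcquireLockFailure above.
import grp
import os
import pwd

def check_lease_writable(lease_path, user="vdsm", group="kvm"):
    """Return None if the lease file looks usable by `user`, else a reason string."""
    if not os.path.exists(lease_path):
        return "lease file does not exist: %s" % lease_path
    st = os.stat(lease_path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    group_name = grp.getgrgid(st.st_gid).gr_name
    if (owner, group_name) != (user, group):
        return ("lease file %s is owned by %s:%s, expected %s:%s"
                % (lease_path, owner, group_name, user, group))
    return None

# Path taken from the DEBUG line above.
reason = check_lease_writable(
    "/rhev/data-center/mnt/orion.qa.lab.tlv.redhat.com:"
    "_export_shared__iso__domain__backup_shared__iso__domain1/"
    "7233a711-98e8-4c3c-bcfa-44c4bcc4f6c6/dom_md/leases")
if reason:
    print("would fail to acquire the cluster lock: %s" % reason)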
Comment 1 Haim 2012-07-22 10:42:03 EDT
sanlock-2.3-2.1.el6.x86_64
Comment 4 Ayal Baron 2012-07-22 16:56:43 EDT
(In reply to comment #1)
> sanlock-2.3-2.1.el6.x86_64

Looking at the output above, this has nothing to do with sanlock (spmprotect uses safelease).
Comment 6 Leonid Natapov 2012-07-23 09:58:56 EDT
The problem was that, for some reason, the permissions on the export domain had been changed from vdsm:kvm to root:root. That is why attaching the shared ISO domain failed. At the same time, the error message given to the user was not informative and gave no hint of what the problem might be. The message was:

Error while executing action Attach Storage Domain: Could not obtain lock

Moving this BZ to the backend in order to fix the message.
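
A minimal sketch of the kind of mapping requested here, assuming hypothetical helper names (this is not the actual change, which went in via the gerrit patch referenced in comment 9): when the lock script reports that the lease file is missing or not writable, surface a permission-oriented message instead of the bare "Could not obtain lock". The wording below is the one eventually reported in comment 16.

# Illustrative only -- helper name and mapping are assumptions, not the real fix.
PERMISSION_HINT = "lease file does not exist or is not writeable"

def explain_lock_failure(rc, out, err):
    """Map raw spmprotect.sh output (lists of lines) to a user-facing message."""
    text = " ".join(out) + " " + " ".join(err)
    if PERMISSION_HINT in text:
        return ("Permission settings on the specified path do not allow "
                "access to the storage. Verify permission settings on the "
                "specified storage path.")
    return "Could not obtain lock (rc=%d)" % rc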
Comment 7 Ayal Baron 2012-08-08 05:44:06 EDT
Oved, dup of: 782942 ?
Comment 8 Oved Ourfali 2012-08-08 07:20:59 EDT
(In reply to comment #7)
> Oved, dup of: 782942 ?

Not sure. The description sounds similar, but the reported error is different:
in 782942, a "Cannot connect server to Storage" error is shown;
in this bug, "cannot acquire lock" is reported when importing a storage domain.

I will need to test that (I have another urgent bug I'm working on now, so I may get to it only next week).
Comment 9 Oved Ourfali 2012-08-20 09:42:08 EDT
Patch posted to gerrit:
http://gerrit.ovirt.org/#/c/7339/
Comment 11 Oved Ourfali 2012-08-27 11:07:02 EDT
This bug is caused by VDSM, so moving the bug to the correct component.
Comment 14 Allon Mureinik 2012-08-29 06:07:47 EDT
Moved to ON_QA by mistake: only the engine build containing the fix was released; the VDSM build is still pending release.
Comment 16 Haim 2012-09-05 09:12:11 EDT
verified on SI17 vdsm-4.9.6-32.0:

The error returned by VDSM for the connection orion.qa.lab.tlv.redhat.com:/export/shared_iso_domain_backup/shared_iso_domain1 was the following: "Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path."
Comment 19 errata-xmlrpc 2012-12-04 14:03:17 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html
