Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 850440

Summary: The MODE definitions in vdsm-lvm.rules deny the access to sanlock
Product: [Retired] oVirt
Component: vdsm
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Version: unspecified
Target Milestone: ---
Target Release: 3.3.4
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Doc Type: Bug Fix
Type: Bug
oVirt Team: Storage
Reporter: Federico Simoncelli <fsimonce>
Assignee: Federico Simoncelli <fsimonce>
QA Contact: Haim <hateya>
CC: abaron, acathrow, amureini, bazulay, dyasny, iheim, mgoldboi, yeylon, ykaul
Last Closed: 2013-02-21 09:46:29 UTC

Description Federico Simoncelli 2012-08-21 15:17:25 UTC
Description of problem:
udev is now taking into account the MODE definitions that we use in 12-vdsm-lvm.rules (MODE:="0600").
Previously this setting apparently had no effect (I also verified this on older versions):

$ grep FIXME vdsm/storage/12-vdsm-lvm.rules
# FIXME: make special lvs vdsm-only readable (MODE doesn't work)

Therefore the ids/leases LVs are now accessible only by the vdsm user:

# ls -l /dev/dm-23 
brw------- 1 vdsm qemu 253, 23 Aug 21 16:57 /dev/dm-23
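
For reference, the 0600 mode from the rule maps exactly to that listing; a minimal stdlib-only sketch (not part of the bug report) decoding it:

```python
import stat

# MODE:="0600" on a block device: owner read/write only.
# Any process not running as the owning user (vdsm) -- including
# the sanlock daemon -- cannot open the device.
mode = stat.S_IFBLK | 0o600
print(stat.filemode(mode))  # brw-------
```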

This denies sanlock access to those LVs:

Thread-2192::ERROR::2012-08-21 17:01:11,169::task::833::TaskManager.Task::(_setError) Task=`7cfe39d3-6a6e-45f4-a2c3-a6ec52fa7209`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 785, in createStoragePool
    return sp.StoragePool(spUUID, self.taskMng).create(poolName, masterDom, domList, masterVersion, safeLease)
  File "/usr/share/vdsm/storage/sp.py", line 565, in create
    self._acquireTemporaryClusterLock(msdUUID, safeLease)
  File "/usr/share/vdsm/storage/sp.py", line 506, in _acquireTemporaryClusterLock
    msd.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 427, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/safelease.py", line 175, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: ('df0acb48-5d97-4878-93b5-6a26ba2ae971', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))

2012-08-21 16:57:27+0300 3360 [1442]: s5 lockspace df0acb48-5d97-4878-93b5-6a26ba2ae971:250:/dev/df0acb48-5d97-4878-93b5-6a26ba2ae971/ids:0
2012-08-21 16:57:27+0300 3360 [5259]: open error -13 /dev/df0acb48-5d97-4878-93b5-6a26ba2ae971/ids
2012-08-21 16:57:27+0300 3360 [5259]: s5 open_disk /dev/df0acb48-5d97-4878-93b5-6a26ba2ae971/ids error -13
2012-08-21 17:01:10+0300 3583 [1440]: s6 lockspace df0acb48-5d97-4878-93b5-6a26ba2ae971:250:/dev/df0acb48-5d97-4878-93b5-6a26ba2ae971/ids:0
2012-08-21 17:01:10+0300 3583 [5566]: open error -13 /dev/df0acb48-5d97-4878-93b5-6a26ba2ae971/ids
2012-08-21 17:01:10+0300 3583 [5566]: s6 open_disk /dev/df0acb48-5d97-4878-93b5-6a26ba2ae971/ids error -13
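
The "-13" in sanlock's open error is -EACCES (permission denied), consistent with the 0600 mode above; a quick stdlib check (illustration, not part of the original report):

```python
import errno
import os

# sanlock logs "open error -13": open(2) failed with errno 13,
# i.e. EACCES ("Permission denied").
print(errno.EACCES)               # 13
print(os.strerror(errno.EACCES))  # Permission denied
```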


Version-Release number of selected component (if applicable):
udev-182-3.fc17.x86_64

Comment 1 Federico Simoncelli 2012-08-24 12:14:18 UTC
A patch has been proposed upstream:

Update the lvm rules permissions with sanlock

The MODE keyword in the lvm rules wasn't used by udev. However, the newer
version shipped with Fedora 17 now honors its value.
The MODE and GROUP values should be updated to allow sanlock to
access the ids and leases LVs.

RHBZ: 850440

Change-Id: Idfca402c4264788a4e01349cc72e5fa3587b6222
Signed-off-by: Federico Simoncelli <fsimonce>

http://gerrit.ovirt.org/#/c/7446/
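
The patch itself is not quoted in this bug; purely as an illustration of the fix described above, a rule change would look roughly like the following (the group name and permission bits here are assumptions, not the actual patch text):

```
# Hypothetical sketch of the 12-vdsm-lvm.rules change -- see the
# gerrit link above for the real patch.
#
# Before: vdsm-only access; newer udev (>= 182) now enforces this:
#   MODE:="0600"
# After: also grant the group used by sanlock read/write access,
# e.g. by opening the group permission bits:
#   OWNER:="vdsm", GROUP:="sanlock", MODE:="0660"
```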

Comment 2 Federico Simoncelli 2013-02-21 09:46:29 UTC
VDSM has been setting the correct permissions on the LVs since vdsm-4.10.0-7.fc17.